Nov 29 05:35:50 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 29 05:35:50 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 29 05:35:50 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 05:35:50 localhost kernel: BIOS-provided physical RAM map:
Nov 29 05:35:50 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 29 05:35:50 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 29 05:35:50 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 29 05:35:50 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 29 05:35:50 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 29 05:35:50 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 29 05:35:50 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 29 05:35:50 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 29 05:35:50 localhost kernel: NX (Execute Disable) protection: active
Nov 29 05:35:50 localhost kernel: APIC: Static calls initialized
Nov 29 05:35:50 localhost kernel: SMBIOS 2.8 present.
Nov 29 05:35:50 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 29 05:35:50 localhost kernel: Hypervisor detected: KVM
Nov 29 05:35:50 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 29 05:35:50 localhost kernel: kvm-clock: using sched offset of 3254755171 cycles
Nov 29 05:35:50 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 29 05:35:50 localhost kernel: tsc: Detected 2800.000 MHz processor
Nov 29 05:35:50 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 29 05:35:50 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 29 05:35:50 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 29 05:35:50 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 29 05:35:50 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 29 05:35:50 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 29 05:35:50 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 29 05:35:50 localhost kernel: Using GB pages for direct mapping
Nov 29 05:35:50 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 29 05:35:50 localhost kernel: ACPI: Early table checksum verification disabled
Nov 29 05:35:50 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 29 05:35:50 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 05:35:50 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 05:35:50 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 05:35:50 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 29 05:35:50 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 05:35:50 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 05:35:50 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 29 05:35:50 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 29 05:35:50 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 29 05:35:50 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 29 05:35:50 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 29 05:35:50 localhost kernel: No NUMA configuration found
Nov 29 05:35:50 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 29 05:35:50 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 29 05:35:50 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 29 05:35:50 localhost kernel: Zone ranges:
Nov 29 05:35:50 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 29 05:35:50 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 29 05:35:50 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 05:35:50 localhost kernel:   Device   empty
Nov 29 05:35:50 localhost kernel: Movable zone start for each node
Nov 29 05:35:50 localhost kernel: Early memory node ranges
Nov 29 05:35:50 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 29 05:35:50 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 29 05:35:50 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 05:35:50 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 29 05:35:50 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 29 05:35:50 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 29 05:35:50 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 29 05:35:50 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 29 05:35:50 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 29 05:35:50 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 29 05:35:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 29 05:35:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 29 05:35:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 29 05:35:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 29 05:35:50 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 29 05:35:50 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 29 05:35:50 localhost kernel: TSC deadline timer available
Nov 29 05:35:50 localhost kernel: CPU topo: Max. logical packages:   8
Nov 29 05:35:50 localhost kernel: CPU topo: Max. logical dies:       8
Nov 29 05:35:50 localhost kernel: CPU topo: Max. dies per package:   1
Nov 29 05:35:50 localhost kernel: CPU topo: Max. threads per core:   1
Nov 29 05:35:50 localhost kernel: CPU topo: Num. cores per package:     1
Nov 29 05:35:50 localhost kernel: CPU topo: Num. threads per package:   1
Nov 29 05:35:50 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 29 05:35:50 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 29 05:35:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 29 05:35:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 29 05:35:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 29 05:35:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 29 05:35:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 29 05:35:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 29 05:35:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 29 05:35:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 29 05:35:50 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 29 05:35:50 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 29 05:35:50 localhost kernel: Booting paravirtualized kernel on KVM
Nov 29 05:35:50 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 29 05:35:50 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 29 05:35:50 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 29 05:35:50 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 29 05:35:50 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 29 05:35:50 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 29 05:35:50 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 05:35:50 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 29 05:35:50 localhost kernel: random: crng init done
Nov 29 05:35:50 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 29 05:35:50 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 29 05:35:50 localhost kernel: Fallback order for Node 0: 0 
Nov 29 05:35:50 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 29 05:35:50 localhost kernel: Policy zone: Normal
Nov 29 05:35:50 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 29 05:35:50 localhost kernel: software IO TLB: area num 8.
Nov 29 05:35:50 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 29 05:35:50 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 29 05:35:50 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 29 05:35:50 localhost kernel: Dynamic Preempt: voluntary
Nov 29 05:35:50 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 29 05:35:50 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 29 05:35:50 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 29 05:35:50 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 29 05:35:50 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 29 05:35:50 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 29 05:35:50 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 29 05:35:50 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 29 05:35:50 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 05:35:50 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 05:35:50 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 05:35:50 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 29 05:35:50 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 29 05:35:50 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 29 05:35:50 localhost kernel: Console: colour VGA+ 80x25
Nov 29 05:35:50 localhost kernel: printk: console [ttyS0] enabled
Nov 29 05:35:50 localhost kernel: ACPI: Core revision 20230331
Nov 29 05:35:50 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 29 05:35:50 localhost kernel: x2apic enabled
Nov 29 05:35:50 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 29 05:35:50 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 29 05:35:50 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Nov 29 05:35:50 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 29 05:35:50 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 29 05:35:50 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 29 05:35:50 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 29 05:35:50 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 29 05:35:50 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 29 05:35:50 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 29 05:35:50 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 29 05:35:50 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 29 05:35:50 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 29 05:35:50 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 29 05:35:50 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 29 05:35:50 localhost kernel: x86/bugs: return thunk changed
Nov 29 05:35:50 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 29 05:35:50 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 29 05:35:50 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 29 05:35:50 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 29 05:35:50 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 29 05:35:50 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 29 05:35:50 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 29 05:35:50 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 29 05:35:50 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 29 05:35:50 localhost kernel: landlock: Up and running.
Nov 29 05:35:50 localhost kernel: Yama: becoming mindful.
Nov 29 05:35:50 localhost kernel: SELinux:  Initializing.
Nov 29 05:35:50 localhost kernel: LSM support for eBPF active
Nov 29 05:35:50 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 05:35:50 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 05:35:50 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 29 05:35:50 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 29 05:35:50 localhost kernel: ... version:                0
Nov 29 05:35:50 localhost kernel: ... bit width:              48
Nov 29 05:35:50 localhost kernel: ... generic registers:      6
Nov 29 05:35:50 localhost kernel: ... value mask:             0000ffffffffffff
Nov 29 05:35:50 localhost kernel: ... max period:             00007fffffffffff
Nov 29 05:35:50 localhost kernel: ... fixed-purpose events:   0
Nov 29 05:35:50 localhost kernel: ... event mask:             000000000000003f
Nov 29 05:35:50 localhost kernel: signal: max sigframe size: 1776
Nov 29 05:35:50 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 29 05:35:50 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 29 05:35:50 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 29 05:35:50 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 29 05:35:50 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 29 05:35:50 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 29 05:35:50 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Nov 29 05:35:50 localhost kernel: node 0 deferred pages initialised in 9ms
Nov 29 05:35:50 localhost kernel: Memory: 7765920K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616268K reserved, 0K cma-reserved)
Nov 29 05:35:50 localhost kernel: devtmpfs: initialized
Nov 29 05:35:50 localhost kernel: x86/mm: Memory block size: 128MB
Nov 29 05:35:50 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 29 05:35:50 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 29 05:35:50 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 29 05:35:50 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 29 05:35:50 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 29 05:35:50 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 29 05:35:50 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 29 05:35:50 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 29 05:35:50 localhost kernel: audit: type=2000 audit(1764394549.049:1): state=initialized audit_enabled=0 res=1
Nov 29 05:35:50 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 29 05:35:50 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 29 05:35:50 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 29 05:35:50 localhost kernel: cpuidle: using governor menu
Nov 29 05:35:50 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 29 05:35:50 localhost kernel: PCI: Using configuration type 1 for base access
Nov 29 05:35:50 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 29 05:35:50 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 29 05:35:50 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 29 05:35:50 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 29 05:35:50 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 29 05:35:50 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 29 05:35:50 localhost kernel: Demotion targets for Node 0: null
Nov 29 05:35:50 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 29 05:35:50 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 29 05:35:50 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 29 05:35:50 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 29 05:35:50 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 29 05:35:50 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 29 05:35:50 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 29 05:35:50 localhost kernel: ACPI: Interpreter enabled
Nov 29 05:35:50 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 29 05:35:50 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 29 05:35:50 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 29 05:35:50 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 29 05:35:50 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 29 05:35:50 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 29 05:35:50 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [3] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [4] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [5] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [6] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [7] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [8] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [9] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [10] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [11] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [12] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [13] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [14] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [15] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [16] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [17] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [18] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [19] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [20] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [21] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [22] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [23] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [24] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [25] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [26] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [27] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [28] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [29] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [30] registered
Nov 29 05:35:50 localhost kernel: acpiphp: Slot [31] registered
Nov 29 05:35:50 localhost kernel: PCI host bridge to bus 0000:00
Nov 29 05:35:50 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 29 05:35:50 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 29 05:35:50 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 29 05:35:50 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 29 05:35:50 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 29 05:35:50 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 29 05:35:50 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 29 05:35:50 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 29 05:35:50 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 29 05:35:50 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 29 05:35:50 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 29 05:35:50 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 29 05:35:50 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 29 05:35:50 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 29 05:35:50 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 29 05:35:50 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 29 05:35:50 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 29 05:35:50 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 29 05:35:50 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 29 05:35:50 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 29 05:35:50 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 29 05:35:50 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 29 05:35:50 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 29 05:35:50 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 29 05:35:50 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 29 05:35:50 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 05:35:50 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 29 05:35:50 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 29 05:35:50 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 29 05:35:50 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 29 05:35:50 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 29 05:35:50 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 29 05:35:50 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 29 05:35:50 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 29 05:35:50 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 05:35:50 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 29 05:35:50 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 29 05:35:50 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 05:35:50 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 29 05:35:50 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 29 05:35:50 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 29 05:35:50 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 29 05:35:50 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 29 05:35:50 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 29 05:35:50 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 29 05:35:50 localhost kernel: iommu: Default domain type: Translated
Nov 29 05:35:50 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 29 05:35:50 localhost kernel: SCSI subsystem initialized
Nov 29 05:35:50 localhost kernel: ACPI: bus type USB registered
Nov 29 05:35:50 localhost kernel: usbcore: registered new interface driver usbfs
Nov 29 05:35:50 localhost kernel: usbcore: registered new interface driver hub
Nov 29 05:35:50 localhost kernel: usbcore: registered new device driver usb
Nov 29 05:35:50 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 29 05:35:50 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 29 05:35:50 localhost kernel: PTP clock support registered
Nov 29 05:35:50 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 29 05:35:50 localhost kernel: NetLabel: Initializing
Nov 29 05:35:50 localhost kernel: NetLabel:  domain hash size = 128
Nov 29 05:35:50 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 29 05:35:50 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 29 05:35:50 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 29 05:35:50 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 29 05:35:50 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 29 05:35:50 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 29 05:35:50 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 29 05:35:50 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 29 05:35:50 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 29 05:35:50 localhost kernel: vgaarb: loaded
Nov 29 05:35:50 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 29 05:35:50 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 29 05:35:50 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 29 05:35:50 localhost kernel: pnp: PnP ACPI init
Nov 29 05:35:50 localhost kernel: pnp 00:03: [dma 2]
Nov 29 05:35:50 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 29 05:35:50 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 29 05:35:50 localhost kernel: NET: Registered PF_INET protocol family
Nov 29 05:35:50 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 29 05:35:50 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 29 05:35:50 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 29 05:35:50 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 29 05:35:50 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 29 05:35:50 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 29 05:35:50 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 29 05:35:50 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 05:35:50 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 05:35:50 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 29 05:35:50 localhost kernel: NET: Registered PF_XDP protocol family
Nov 29 05:35:50 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 29 05:35:50 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 29 05:35:50 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 29 05:35:50 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 29 05:35:50 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 29 05:35:50 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 29 05:35:50 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 29 05:35:50 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 29 05:35:50 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 77321 usecs
Nov 29 05:35:50 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 29 05:35:50 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 29 05:35:50 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 29 05:35:50 localhost kernel: ACPI: bus type thunderbolt registered
Nov 29 05:35:50 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 29 05:35:50 localhost kernel: Initialise system trusted keyrings
Nov 29 05:35:50 localhost kernel: Key type blacklist registered
Nov 29 05:35:50 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 29 05:35:50 localhost kernel: zbud: loaded
Nov 29 05:35:50 localhost kernel: integrity: Platform Keyring initialized
Nov 29 05:35:50 localhost kernel: integrity: Machine keyring initialized
Nov 29 05:35:50 localhost kernel: Freeing initrd memory: 85868K
Nov 29 05:35:50 localhost kernel: NET: Registered PF_ALG protocol family
Nov 29 05:35:50 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 29 05:35:50 localhost kernel: Key type asymmetric registered
Nov 29 05:35:50 localhost kernel: Asymmetric key parser 'x509' registered
Nov 29 05:35:50 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 29 05:35:50 localhost kernel: io scheduler mq-deadline registered
Nov 29 05:35:50 localhost kernel: io scheduler kyber registered
Nov 29 05:35:50 localhost kernel: io scheduler bfq registered
Nov 29 05:35:50 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 29 05:35:50 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 29 05:35:50 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 29 05:35:50 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 29 05:35:50 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 29 05:35:50 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 29 05:35:50 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 29 05:35:50 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 29 05:35:50 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 29 05:35:50 localhost kernel: Non-volatile memory driver v1.3
Nov 29 05:35:50 localhost kernel: rdac: device handler registered
Nov 29 05:35:50 localhost kernel: hp_sw: device handler registered
Nov 29 05:35:50 localhost kernel: emc: device handler registered
Nov 29 05:35:50 localhost kernel: alua: device handler registered
Nov 29 05:35:50 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 29 05:35:50 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 29 05:35:50 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 29 05:35:50 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 29 05:35:50 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 29 05:35:50 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 29 05:35:50 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 29 05:35:50 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 29 05:35:50 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 29 05:35:50 localhost kernel: hub 1-0:1.0: USB hub found
Nov 29 05:35:50 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 29 05:35:50 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 29 05:35:50 localhost kernel: usbserial: USB Serial support registered for generic
Nov 29 05:35:50 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 29 05:35:50 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 29 05:35:50 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 29 05:35:50 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 29 05:35:50 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 29 05:35:50 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 29 05:35:50 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 29 05:35:50 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-29T05:35:49 UTC (1764394549)
Nov 29 05:35:50 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 29 05:35:50 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 29 05:35:50 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 29 05:35:50 localhost kernel: usbcore: registered new interface driver usbhid
Nov 29 05:35:50 localhost kernel: usbhid: USB HID core driver
Nov 29 05:35:50 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 29 05:35:50 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 29 05:35:50 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 29 05:35:50 localhost kernel: Initializing XFRM netlink socket
Nov 29 05:35:50 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 29 05:35:50 localhost kernel: Segment Routing with IPv6
Nov 29 05:35:50 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 29 05:35:50 localhost kernel: mpls_gso: MPLS GSO support
Nov 29 05:35:50 localhost kernel: IPI shorthand broadcast: enabled
Nov 29 05:35:50 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 29 05:35:50 localhost kernel: AES CTR mode by8 optimization enabled
Nov 29 05:35:50 localhost kernel: sched_clock: Marking stable (1275011990, 133424950)->(1594138089, -185701149)
Nov 29 05:35:50 localhost kernel: registered taskstats version 1
Nov 29 05:35:50 localhost kernel: Loading compiled-in X.509 certificates
Nov 29 05:35:50 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 05:35:50 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 29 05:35:50 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 29 05:35:50 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 29 05:35:50 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 29 05:35:50 localhost kernel: Demotion targets for Node 0: null
Nov 29 05:35:50 localhost kernel: page_owner is disabled
Nov 29 05:35:50 localhost kernel: Key type .fscrypt registered
Nov 29 05:35:50 localhost kernel: Key type fscrypt-provisioning registered
Nov 29 05:35:50 localhost kernel: Key type big_key registered
Nov 29 05:35:50 localhost kernel: Key type encrypted registered
Nov 29 05:35:50 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 29 05:35:50 localhost kernel: Loading compiled-in module X.509 certificates
Nov 29 05:35:50 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 05:35:50 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 29 05:35:50 localhost kernel: ima: No architecture policies found
Nov 29 05:35:50 localhost kernel: evm: Initialising EVM extended attributes:
Nov 29 05:35:50 localhost kernel: evm: security.selinux
Nov 29 05:35:50 localhost kernel: evm: security.SMACK64 (disabled)
Nov 29 05:35:50 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 29 05:35:50 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 29 05:35:50 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 29 05:35:50 localhost kernel: evm: security.apparmor (disabled)
Nov 29 05:35:50 localhost kernel: evm: security.ima
Nov 29 05:35:50 localhost kernel: evm: security.capability
Nov 29 05:35:50 localhost kernel: evm: HMAC attrs: 0x1
Nov 29 05:35:50 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 29 05:35:50 localhost kernel: Running certificate verification RSA selftest
Nov 29 05:35:50 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 29 05:35:50 localhost kernel: Running certificate verification ECDSA selftest
Nov 29 05:35:50 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 29 05:35:50 localhost kernel: clk: Disabling unused clocks
Nov 29 05:35:50 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 29 05:35:50 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 29 05:35:50 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 29 05:35:50 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 29 05:35:50 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 29 05:35:50 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 29 05:35:50 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 29 05:35:50 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 29 05:35:50 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 29 05:35:50 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 29 05:35:50 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 29 05:35:50 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 29 05:35:50 localhost kernel: Run /init as init process
Nov 29 05:35:50 localhost kernel:   with arguments:
Nov 29 05:35:50 localhost kernel:     /init
Nov 29 05:35:50 localhost kernel:   with environment:
Nov 29 05:35:50 localhost kernel:     HOME=/
Nov 29 05:35:50 localhost kernel:     TERM=linux
Nov 29 05:35:50 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 29 05:35:50 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 05:35:50 localhost systemd[1]: Detected virtualization kvm.
Nov 29 05:35:50 localhost systemd[1]: Detected architecture x86-64.
Nov 29 05:35:50 localhost systemd[1]: Running in initrd.
Nov 29 05:35:50 localhost systemd[1]: No hostname configured, using default hostname.
Nov 29 05:35:50 localhost systemd[1]: Hostname set to <localhost>.
Nov 29 05:35:50 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 29 05:35:50 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 29 05:35:50 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 05:35:50 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 29 05:35:50 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 29 05:35:50 localhost systemd[1]: Reached target Local File Systems.
Nov 29 05:35:50 localhost systemd[1]: Reached target Path Units.
Nov 29 05:35:50 localhost systemd[1]: Reached target Slice Units.
Nov 29 05:35:50 localhost systemd[1]: Reached target Swaps.
Nov 29 05:35:50 localhost systemd[1]: Reached target Timer Units.
Nov 29 05:35:50 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 05:35:50 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 29 05:35:50 localhost systemd[1]: Listening on Journal Socket.
Nov 29 05:35:50 localhost systemd[1]: Listening on udev Control Socket.
Nov 29 05:35:50 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 29 05:35:50 localhost systemd[1]: Reached target Socket Units.
Nov 29 05:35:50 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 29 05:35:50 localhost systemd[1]: Starting Journal Service...
Nov 29 05:35:50 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 05:35:50 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 29 05:35:50 localhost systemd[1]: Starting Create System Users...
Nov 29 05:35:50 localhost systemd[1]: Starting Setup Virtual Console...
Nov 29 05:35:50 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 05:35:50 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 29 05:35:50 localhost systemd[1]: Finished Create System Users.
Nov 29 05:35:50 localhost systemd-journald[305]: Journal started
Nov 29 05:35:50 localhost systemd-journald[305]: Runtime Journal (/run/log/journal/c87c7517e5694e428023b11f25bc4e0c) is 8.0M, max 153.6M, 145.6M free.
Nov 29 05:35:50 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Nov 29 05:35:50 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Nov 29 05:35:50 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 29 05:35:50 localhost systemd[1]: Started Journal Service.
Nov 29 05:35:50 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 05:35:50 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 05:35:50 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 05:35:50 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 05:35:50 localhost systemd[1]: Finished Setup Virtual Console.
Nov 29 05:35:50 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 29 05:35:50 localhost systemd[1]: Starting dracut cmdline hook...
Nov 29 05:35:50 localhost dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Nov 29 05:35:50 localhost dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 05:35:50 localhost systemd[1]: Finished dracut cmdline hook.
Nov 29 05:35:50 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 29 05:35:50 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 29 05:35:50 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 29 05:35:50 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 29 05:35:50 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 29 05:35:50 localhost kernel: RPC: Registered udp transport module.
Nov 29 05:35:50 localhost kernel: RPC: Registered tcp transport module.
Nov 29 05:35:50 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 29 05:35:50 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 29 05:35:50 localhost rpc.statd[444]: Version 2.5.4 starting
Nov 29 05:35:50 localhost rpc.statd[444]: Initializing NSM state
Nov 29 05:35:50 localhost rpc.idmapd[449]: Setting log level to 0
Nov 29 05:35:50 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 29 05:35:50 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 05:35:50 localhost systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 05:35:50 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 05:35:51 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 29 05:35:51 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 29 05:35:51 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 29 05:35:51 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 29 05:35:51 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 05:35:51 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 29 05:35:51 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 05:35:51 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 05:35:51 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 05:35:51 localhost systemd[1]: Reached target Network.
Nov 29 05:35:51 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 05:35:51 localhost systemd[1]: Starting dracut initqueue hook...
Nov 29 05:35:51 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 29 05:35:51 localhost kernel: libata version 3.00 loaded.
Nov 29 05:35:51 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 29 05:35:51 localhost kernel:  vda: vda1
Nov 29 05:35:51 localhost systemd-udevd[496]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 05:35:51 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 29 05:35:51 localhost kernel: scsi host0: ata_piix
Nov 29 05:35:51 localhost kernel: scsi host1: ata_piix
Nov 29 05:35:51 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 29 05:35:51 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 29 05:35:51 localhost systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 05:35:51 localhost systemd[1]: Reached target Initrd Root Device.
Nov 29 05:35:51 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 29 05:35:51 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 29 05:35:51 localhost systemd[1]: Reached target System Initialization.
Nov 29 05:35:51 localhost kernel: ata1: found unknown device (class 0)
Nov 29 05:35:51 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 29 05:35:51 localhost systemd[1]: Reached target Basic System.
Nov 29 05:35:51 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 29 05:35:51 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 29 05:35:51 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 29 05:35:51 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 29 05:35:51 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 29 05:35:51 localhost systemd[1]: Finished dracut initqueue hook.
Nov 29 05:35:51 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 05:35:51 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 29 05:35:51 localhost systemd[1]: Reached target Remote File Systems.
Nov 29 05:35:51 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 29 05:35:51 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 29 05:35:51 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 29 05:35:51 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Nov 29 05:35:51 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 05:35:51 localhost systemd[1]: Mounting /sysroot...
Nov 29 05:35:52 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 29 05:35:52 localhost kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 29 05:35:52 localhost kernel: XFS (vda1): Ending clean mount
Nov 29 05:35:52 localhost systemd[1]: Mounted /sysroot.
Nov 29 05:35:52 localhost systemd[1]: Reached target Initrd Root File System.
Nov 29 05:35:52 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 29 05:35:52 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 29 05:35:52 localhost systemd[1]: Reached target Initrd File Systems.
Nov 29 05:35:52 localhost systemd[1]: Reached target Initrd Default Target.
Nov 29 05:35:52 localhost systemd[1]: Starting dracut mount hook...
Nov 29 05:35:52 localhost systemd[1]: Finished dracut mount hook.
Nov 29 05:35:52 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 29 05:35:52 localhost rpc.idmapd[449]: exiting on signal 15
Nov 29 05:35:52 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 29 05:35:52 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 29 05:35:52 localhost systemd[1]: Stopped target Network.
Nov 29 05:35:52 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 29 05:35:52 localhost systemd[1]: Stopped target Timer Units.
Nov 29 05:35:52 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 29 05:35:52 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 29 05:35:52 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 29 05:35:52 localhost systemd[1]: Stopped target Basic System.
Nov 29 05:35:52 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 29 05:35:52 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 29 05:35:52 localhost systemd[1]: Stopped target Path Units.
Nov 29 05:35:52 localhost systemd[1]: Stopped target Remote File Systems.
Nov 29 05:35:52 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 29 05:35:52 localhost systemd[1]: Stopped target Slice Units.
Nov 29 05:35:52 localhost systemd[1]: Stopped target Socket Units.
Nov 29 05:35:52 localhost systemd[1]: Stopped target System Initialization.
Nov 29 05:35:52 localhost systemd[1]: Stopped target Local File Systems.
Nov 29 05:35:52 localhost systemd[1]: Stopped target Swaps.
Nov 29 05:35:52 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped dracut mount hook.
Nov 29 05:35:52 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 29 05:35:52 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 29 05:35:52 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 29 05:35:52 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 29 05:35:52 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 29 05:35:52 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 29 05:35:52 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 29 05:35:52 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 29 05:35:52 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 29 05:35:52 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 29 05:35:52 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 29 05:35:52 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 29 05:35:52 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Closed udev Control Socket.
Nov 29 05:35:52 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Closed udev Kernel Socket.
Nov 29 05:35:52 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 29 05:35:52 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 29 05:35:52 localhost systemd[1]: Starting Cleanup udev Database...
Nov 29 05:35:52 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 29 05:35:52 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 29 05:35:52 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Stopped Create System Users.
Nov 29 05:35:52 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 29 05:35:52 localhost systemd[1]: Finished Cleanup udev Database.
Nov 29 05:35:52 localhost systemd[1]: Reached target Switch Root.
Nov 29 05:35:52 localhost systemd[1]: Starting Switch Root...
Nov 29 05:35:52 localhost systemd[1]: Switching root.
Nov 29 05:35:52 localhost systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Nov 29 05:35:52 localhost systemd-journald[305]: Journal stopped
Nov 29 05:35:53 localhost kernel: audit: type=1404 audit(1764394552.630:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 29 05:35:53 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 05:35:53 localhost kernel: SELinux:  policy capability open_perms=1
Nov 29 05:35:53 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 05:35:53 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 29 05:35:53 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 05:35:53 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 05:35:53 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 05:35:53 localhost kernel: audit: type=1403 audit(1764394552.777:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 29 05:35:53 localhost systemd[1]: Successfully loaded SELinux policy in 153.934ms.
Nov 29 05:35:53 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.124ms.
Nov 29 05:35:53 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 05:35:53 localhost systemd[1]: Detected virtualization kvm.
Nov 29 05:35:53 localhost systemd[1]: Detected architecture x86-64.
Nov 29 05:35:53 localhost systemd-rc-local-generator[642]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:35:53 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 29 05:35:53 localhost systemd[1]: Stopped Switch Root.
Nov 29 05:35:53 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 29 05:35:53 localhost systemd[1]: Created slice Slice /system/getty.
Nov 29 05:35:53 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 29 05:35:53 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 29 05:35:53 localhost systemd[1]: Created slice User and Session Slice.
Nov 29 05:35:53 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 05:35:53 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 29 05:35:53 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 29 05:35:53 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 29 05:35:53 localhost systemd[1]: Stopped target Switch Root.
Nov 29 05:35:53 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 29 05:35:53 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 29 05:35:53 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 29 05:35:53 localhost systemd[1]: Reached target Path Units.
Nov 29 05:35:53 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 29 05:35:53 localhost systemd[1]: Reached target Slice Units.
Nov 29 05:35:53 localhost systemd[1]: Reached target Swaps.
Nov 29 05:35:53 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 29 05:35:53 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 29 05:35:53 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 29 05:35:53 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 29 05:35:53 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 29 05:35:53 localhost systemd[1]: Listening on udev Control Socket.
Nov 29 05:35:53 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 29 05:35:53 localhost systemd[1]: Mounting Huge Pages File System...
Nov 29 05:35:53 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 29 05:35:53 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 29 05:35:53 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 29 05:35:53 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 05:35:53 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 29 05:35:53 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 05:35:53 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 29 05:35:53 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 29 05:35:53 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 29 05:35:53 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 29 05:35:53 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 29 05:35:53 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 29 05:35:53 localhost systemd[1]: Stopped Journal Service.
Nov 29 05:35:53 localhost systemd[1]: Starting Journal Service...
Nov 29 05:35:53 localhost kernel: fuse: init (API version 7.37)
Nov 29 05:35:53 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 05:35:53 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 29 05:35:53 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 05:35:53 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 29 05:35:53 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 29 05:35:53 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 29 05:35:53 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 29 05:35:53 localhost systemd-journald[683]: Journal started
Nov 29 05:35:53 localhost systemd-journald[683]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 05:35:53 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 29 05:35:53 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 29 05:35:53 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 29 05:35:53 localhost systemd[1]: Started Journal Service.
Nov 29 05:35:53 localhost systemd[1]: Mounted Huge Pages File System.
Nov 29 05:35:53 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 29 05:35:53 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 29 05:35:53 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 29 05:35:53 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 05:35:53 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 05:35:53 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 05:35:53 localhost kernel: ACPI: bus type drm_connector registered
Nov 29 05:35:53 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 29 05:35:53 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 29 05:35:53 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 29 05:35:53 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 29 05:35:53 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 29 05:35:53 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 29 05:35:53 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 29 05:35:53 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 29 05:35:53 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 29 05:35:53 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 29 05:35:53 localhost systemd[1]: Mounting FUSE Control File System...
Nov 29 05:35:53 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 05:35:53 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 29 05:35:53 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 29 05:35:53 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 29 05:35:53 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 29 05:35:53 localhost systemd[1]: Starting Create System Users...
Nov 29 05:35:53 localhost systemd[1]: Mounted FUSE Control File System.
Nov 29 05:35:53 localhost systemd-journald[683]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 05:35:53 localhost systemd-journald[683]: Received client request to flush runtime journal.
Nov 29 05:35:53 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 29 05:35:53 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 29 05:35:53 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 29 05:35:53 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 05:35:53 localhost systemd[1]: Finished Create System Users.
Nov 29 05:35:53 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 05:35:53 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 05:35:53 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 29 05:35:53 localhost systemd[1]: Reached target Local File Systems.
Nov 29 05:35:53 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 29 05:35:53 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 29 05:35:53 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 29 05:35:53 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 29 05:35:53 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 29 05:35:53 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 29 05:35:53 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 05:35:53 localhost bootctl[700]: Couldn't find EFI system partition, skipping.
Nov 29 05:35:53 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 29 05:35:53 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 05:35:53 localhost systemd[1]: Starting Security Auditing Service...
Nov 29 05:35:53 localhost systemd[1]: Starting RPC Bind...
Nov 29 05:35:53 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 29 05:35:53 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 29 05:35:53 localhost auditd[707]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 29 05:35:53 localhost auditd[707]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 29 05:35:53 localhost systemd[1]: Started RPC Bind.
Nov 29 05:35:53 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 29 05:35:53 localhost augenrules[712]: /sbin/augenrules: No change
Nov 29 05:35:53 localhost augenrules[727]: No rules
Nov 29 05:35:53 localhost augenrules[727]: enabled 1
Nov 29 05:35:53 localhost augenrules[727]: failure 1
Nov 29 05:35:53 localhost augenrules[727]: pid 707
Nov 29 05:35:53 localhost augenrules[727]: rate_limit 0
Nov 29 05:35:53 localhost augenrules[727]: backlog_limit 8192
Nov 29 05:35:53 localhost augenrules[727]: lost 0
Nov 29 05:35:53 localhost augenrules[727]: backlog 0
Nov 29 05:35:53 localhost augenrules[727]: backlog_wait_time 60000
Nov 29 05:35:53 localhost augenrules[727]: backlog_wait_time_actual 0
Nov 29 05:35:53 localhost augenrules[727]: enabled 1
Nov 29 05:35:53 localhost augenrules[727]: failure 1
Nov 29 05:35:53 localhost augenrules[727]: pid 707
Nov 29 05:35:53 localhost augenrules[727]: rate_limit 0
Nov 29 05:35:53 localhost augenrules[727]: backlog_limit 8192
Nov 29 05:35:53 localhost augenrules[727]: lost 0
Nov 29 05:35:53 localhost augenrules[727]: backlog 0
Nov 29 05:35:53 localhost augenrules[727]: backlog_wait_time 60000
Nov 29 05:35:53 localhost augenrules[727]: backlog_wait_time_actual 0
Nov 29 05:35:53 localhost augenrules[727]: enabled 1
Nov 29 05:35:53 localhost augenrules[727]: failure 1
Nov 29 05:35:53 localhost augenrules[727]: pid 707
Nov 29 05:35:53 localhost augenrules[727]: rate_limit 0
Nov 29 05:35:53 localhost augenrules[727]: backlog_limit 8192
Nov 29 05:35:53 localhost augenrules[727]: lost 0
Nov 29 05:35:53 localhost augenrules[727]: backlog 1
Nov 29 05:35:53 localhost augenrules[727]: backlog_wait_time 60000
Nov 29 05:35:53 localhost augenrules[727]: backlog_wait_time_actual 0
Nov 29 05:35:53 localhost systemd[1]: Started Security Auditing Service.
Nov 29 05:35:53 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 29 05:35:53 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 29 05:35:54 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 29 05:35:54 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 05:35:54 localhost systemd[1]: Starting Update is Completed...
Nov 29 05:35:54 localhost systemd[1]: Finished Update is Completed.
Nov 29 05:35:54 localhost systemd-udevd[735]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 05:35:54 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 05:35:54 localhost systemd[1]: Reached target System Initialization.
Nov 29 05:35:54 localhost systemd[1]: Started dnf makecache --timer.
Nov 29 05:35:54 localhost systemd[1]: Started Daily rotation of log files.
Nov 29 05:35:54 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 29 05:35:54 localhost systemd[1]: Reached target Timer Units.
Nov 29 05:35:54 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 05:35:54 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 29 05:35:54 localhost systemd[1]: Reached target Socket Units.
Nov 29 05:35:54 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 29 05:35:54 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 05:35:54 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 05:35:54 localhost systemd-udevd[752]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 05:35:54 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 05:35:54 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 05:35:54 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 29 05:35:54 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 29 05:35:54 localhost systemd[1]: Reached target Basic System.
Nov 29 05:35:54 localhost dbus-broker-lau[771]: Ready
Nov 29 05:35:54 localhost systemd[1]: Starting NTP client/server...
Nov 29 05:35:54 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 29 05:35:54 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 29 05:35:54 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 29 05:35:54 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 29 05:35:54 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 29 05:35:54 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 29 05:35:54 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 29 05:35:54 localhost systemd[1]: Started irqbalance daemon.
Nov 29 05:35:54 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 29 05:35:54 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 05:35:54 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 05:35:54 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 05:35:54 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 29 05:35:54 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 29 05:35:54 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 29 05:35:55 localhost chronyd[800]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 05:35:55 localhost chronyd[800]: Loaded 0 symmetric keys
Nov 29 05:35:55 localhost chronyd[800]: Using right/UTC timezone to obtain leap second data
Nov 29 05:35:55 localhost chronyd[800]: Loaded seccomp filter (level 2)
Nov 29 05:35:55 localhost systemd[1]: Starting User Login Management...
Nov 29 05:35:55 localhost systemd[1]: Started NTP client/server.
Nov 29 05:35:55 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 29 05:35:55 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 29 05:35:55 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 29 05:35:55 localhost kernel: kvm_amd: TSC scaling supported
Nov 29 05:35:55 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 29 05:35:55 localhost kernel: kvm_amd: Nested Paging enabled
Nov 29 05:35:55 localhost kernel: kvm_amd: LBR virtualization supported
Nov 29 05:35:55 localhost systemd-logind[797]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 05:35:55 localhost systemd-logind[797]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 05:35:55 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 29 05:35:55 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 29 05:35:55 localhost iptables.init[784]: iptables: Applying firewall rules: [  OK  ]
Nov 29 05:35:55 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 29 05:35:55 localhost systemd-logind[797]: New seat seat0.
Nov 29 05:35:55 localhost kernel: Console: switching to colour dummy device 80x25
Nov 29 05:35:55 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 29 05:35:55 localhost kernel: [drm] features: -context_init
Nov 29 05:35:55 localhost kernel: [drm] number of scanouts: 1
Nov 29 05:35:55 localhost kernel: [drm] number of cap sets: 0
Nov 29 05:35:55 localhost systemd[1]: Started User Login Management.
Nov 29 05:35:55 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 29 05:35:55 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 29 05:35:55 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 29 05:35:55 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 29 05:35:55 localhost cloud-init[843]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 29 Nov 2025 05:35:55 +0000. Up 7.19 seconds.
Nov 29 05:35:55 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 29 05:35:55 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 29 05:35:55 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpea_o52zy.mount: Deactivated successfully.
Nov 29 05:35:55 localhost systemd[1]: Starting Hostname Service...
Nov 29 05:35:55 localhost systemd[1]: Started Hostname Service.
Nov 29 05:35:55 np0005539508.novalocal systemd-hostnamed[857]: Hostname set to <np0005539508.novalocal> (static)
Nov 29 05:35:55 np0005539508.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 29 05:35:55 np0005539508.novalocal systemd[1]: Reached target Preparation for Network.
Nov 29 05:35:55 np0005539508.novalocal systemd[1]: Starting Network Manager...
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0116] NetworkManager (version 1.54.1-1.el9) is starting... (boot:b7b17a39-22f5-4f4f-9861-b1bcbadcfe77)
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0122] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0196] manager[0x55da37017080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0232] hostname: hostname: using hostnamed
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0232] hostname: static hostname changed from (none) to "np0005539508.novalocal"
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0237] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0387] manager[0x55da37017080]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0389] manager[0x55da37017080]: rfkill: WWAN hardware radio set enabled
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0464] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0465] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0466] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0467] manager: Networking is enabled by state file
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0475] settings: Loaded settings plugin: keyfile (internal)
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0503] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0548] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0573] dhcp: init: Using DHCP client 'internal'
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0583] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0608] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0619] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0631] device (lo): Activation: starting connection 'lo' (1e70ab37-1fe6-47fd-afad-f3ac90d7657d)
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0649] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0653] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0693] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0699] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0703] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0707] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0710] device (eth0): carrier: link connected
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0717] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0728] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0739] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: Started Network Manager.
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0747] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0749] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0755] manager: NetworkManager state is now CONNECTING
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0758] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: Reached target Network.
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0771] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0778] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0828] dhcp4 (eth0): state changed new lease, address=38.102.83.22
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0838] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0865] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0888] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0890] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0899] device (lo): Activation: successful, device activated.
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0926] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0929] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0934] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0939] device (eth0): Activation: successful, device activated.
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0945] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 05:35:56 np0005539508.novalocal NetworkManager[861]: <info>  [1764394556.0950] manager: startup complete
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: Reached target NFS client services.
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: Reached target Remote File Systems.
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 29 05:35:56 np0005539508.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 29 Nov 2025 05:35:56 +0000. Up 8.14 seconds.
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: |  eth0  | True |         38.102.83.22         | 255.255.255.0 | global | fa:16:3e:f2:9a:ed |
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fef2:9aed/64 |       .       |  link  | fa:16:3e:f2:9a:ed |
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 29 05:35:56 np0005539508.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 05:35:57 np0005539508.novalocal useradd[990]: new group: name=cloud-user, GID=1001
Nov 29 05:35:57 np0005539508.novalocal useradd[990]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 29 05:35:57 np0005539508.novalocal useradd[990]: add 'cloud-user' to group 'adm'
Nov 29 05:35:57 np0005539508.novalocal useradd[990]: add 'cloud-user' to group 'systemd-journal'
Nov 29 05:35:57 np0005539508.novalocal useradd[990]: add 'cloud-user' to shadow group 'adm'
Nov 29 05:35:57 np0005539508.novalocal useradd[990]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: Generating public/private rsa key pair.
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: The key fingerprint is:
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: SHA256:3CWYiCW/jSPEeU8I+Mvc3QgpD62OznLpLjReeStf2yo root@np0005539508.novalocal
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: The key's randomart image is:
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: +---[RSA 3072]----+
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |   .o .          |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |  .. B o o       |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |   .=.=.+ . .    |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |   .+.+B . o     |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |   ooB+oSo.      |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: | o o=oo.o .      |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |o o+. ..         |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |oo+..E. o        |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: | B= o..o..       |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: +----[SHA256]-----+
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: Generating public/private ecdsa key pair.
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: The key fingerprint is:
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: SHA256:GO5/9HL6MYPP3c7Vj+Fz7AZaE4q18Kj85ROe60WDJGI root@np0005539508.novalocal
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: The key's randomart image is:
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: +---[ECDSA 256]---+
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |                 |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |                 |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |      . E . .    |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |     . + ..o...  |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |      o S  *.oo. |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |     .    +.=.+..|
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |      .. o.o==o+o|
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |       .o o=BB.=*|
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |        .oo*Bo+**|
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: +----[SHA256]-----+
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: Generating public/private ed25519 key pair.
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: The key fingerprint is:
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: SHA256:4QuCVs1s8IYegrr0KSA4lnjz6YlWnvDPNcJwzZxpPfM root@np0005539508.novalocal
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: The key's randomart image is:
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: +--[ED25519 256]--+
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |    .            |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: | .   B           |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |. . + B .        |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |+ .= + = =       |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |B+= + o S +      |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |==ooo* o . +     |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |.. Bo.o +   E    |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: |  oo+o o .       |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: | .. o.o          |
Nov 29 05:35:57 np0005539508.novalocal cloud-init[924]: +----[SHA256]-----+
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Reached target Network is Online.
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Starting System Logging Service...
Nov 29 05:35:57 np0005539508.novalocal sm-notify[1006]: Version 2.5.4 starting
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Starting Permit User Sessions...
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 29 05:35:57 np0005539508.novalocal sshd[1008]: Server listening on 0.0.0.0 port 22.
Nov 29 05:35:57 np0005539508.novalocal sshd[1008]: Server listening on :: port 22.
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Finished Permit User Sessions.
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Started Command Scheduler.
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Started Getty on tty1.
Nov 29 05:35:57 np0005539508.novalocal crond[1011]: (CRON) STARTUP (1.5.7)
Nov 29 05:35:57 np0005539508.novalocal crond[1011]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 29 05:35:57 np0005539508.novalocal crond[1011]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 41% if used.)
Nov 29 05:35:57 np0005539508.novalocal crond[1011]: (CRON) INFO (running with inotify support)
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Reached target Login Prompts.
Nov 29 05:35:57 np0005539508.novalocal rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Nov 29 05:35:57 np0005539508.novalocal rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Started System Logging Service.
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Reached target Multi-User System.
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 29 05:35:57 np0005539508.novalocal rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 29 05:35:57 np0005539508.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 29 05:35:58 np0005539508.novalocal kdumpctl[1016]: kdump: No kdump initial ramdisk found.
Nov 29 05:35:58 np0005539508.novalocal kdumpctl[1016]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 29 05:35:58 np0005539508.novalocal sshd-session[1071]: Unable to negotiate with 38.102.83.114 port 36812: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 29 05:35:58 np0005539508.novalocal sshd-session[1083]: Unable to negotiate with 38.102.83.114 port 36832: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 29 05:35:58 np0005539508.novalocal sshd-session[1103]: Unable to negotiate with 38.102.83.114 port 36836: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 29 05:35:58 np0005539508.novalocal cloud-init[1114]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 29 Nov 2025 05:35:58 +0000. Up 9.86 seconds.
Nov 29 05:35:58 np0005539508.novalocal sshd-session[1064]: Connection closed by 38.102.83.114 port 36798 [preauth]
Nov 29 05:35:58 np0005539508.novalocal sshd-session[1075]: Connection closed by 38.102.83.114 port 36816 [preauth]
Nov 29 05:35:58 np0005539508.novalocal sshd-session[1125]: Connection reset by 38.102.83.114 port 36846 [preauth]
Nov 29 05:35:58 np0005539508.novalocal sshd-session[1145]: Unable to negotiate with 38.102.83.114 port 36862: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 29 05:35:58 np0005539508.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 29 05:35:58 np0005539508.novalocal sshd-session[1152]: Unable to negotiate with 38.102.83.114 port 36874: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 29 05:35:58 np0005539508.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 29 05:35:58 np0005539508.novalocal sshd-session[1113]: Connection closed by 38.102.83.114 port 36842 [preauth]
Nov 29 05:35:58 np0005539508.novalocal dracut[1285]: dracut-057-102.git20250818.el9
Nov 29 05:35:58 np0005539508.novalocal cloud-init[1303]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 29 Nov 2025 05:35:58 +0000. Up 10.28 seconds.
Nov 29 05:35:58 np0005539508.novalocal cloud-init[1305]: #############################################################
Nov 29 05:35:58 np0005539508.novalocal cloud-init[1306]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 29 05:35:58 np0005539508.novalocal cloud-init[1308]: 256 SHA256:GO5/9HL6MYPP3c7Vj+Fz7AZaE4q18Kj85ROe60WDJGI root@np0005539508.novalocal (ECDSA)
Nov 29 05:35:58 np0005539508.novalocal cloud-init[1312]: 256 SHA256:4QuCVs1s8IYegrr0KSA4lnjz6YlWnvDPNcJwzZxpPfM root@np0005539508.novalocal (ED25519)
Nov 29 05:35:58 np0005539508.novalocal cloud-init[1317]: 3072 SHA256:3CWYiCW/jSPEeU8I+Mvc3QgpD62OznLpLjReeStf2yo root@np0005539508.novalocal (RSA)
Nov 29 05:35:58 np0005539508.novalocal cloud-init[1322]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 29 05:35:58 np0005539508.novalocal cloud-init[1323]: #############################################################
Nov 29 05:35:58 np0005539508.novalocal dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 29 05:35:58 np0005539508.novalocal cloud-init[1303]: Cloud-init v. 24.4-7.el9 finished at Sat, 29 Nov 2025 05:35:58 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.47 seconds
Nov 29 05:35:58 np0005539508.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 29 05:35:58 np0005539508.novalocal systemd[1]: Reached target Cloud-init target.
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 29 05:35:59 np0005539508.novalocal dracut[1287]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: memstrack is not available
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: memstrack is not available
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: *** Including module: systemd ***
Nov 29 05:36:00 np0005539508.novalocal dracut[1287]: *** Including module: fips ***
Nov 29 05:36:01 np0005539508.novalocal dracut[1287]: *** Including module: systemd-initrd ***
Nov 29 05:36:01 np0005539508.novalocal dracut[1287]: *** Including module: i18n ***
Nov 29 05:36:01 np0005539508.novalocal chronyd[800]: Selected source 162.159.200.123 (2.centos.pool.ntp.org)
Nov 29 05:36:01 np0005539508.novalocal chronyd[800]: System clock wrong by 1.496457 seconds
Nov 29 05:36:02 np0005539508.novalocal chronyd[800]: System clock was stepped by 1.496457 seconds
Nov 29 05:36:02 np0005539508.novalocal chronyd[800]: System clock TAI offset set to 37 seconds
Nov 29 05:36:02 np0005539508.novalocal dracut[1287]: *** Including module: drm ***
Nov 29 05:36:03 np0005539508.novalocal dracut[1287]: *** Including module: prefixdevname ***
Nov 29 05:36:03 np0005539508.novalocal dracut[1287]: *** Including module: kernel-modules ***
Nov 29 05:36:03 np0005539508.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 29 05:36:03 np0005539508.novalocal dracut[1287]: *** Including module: kernel-modules-extra ***
Nov 29 05:36:03 np0005539508.novalocal dracut[1287]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 29 05:36:03 np0005539508.novalocal dracut[1287]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 29 05:36:03 np0005539508.novalocal dracut[1287]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 29 05:36:03 np0005539508.novalocal dracut[1287]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 29 05:36:03 np0005539508.novalocal dracut[1287]: *** Including module: qemu ***
Nov 29 05:36:03 np0005539508.novalocal dracut[1287]: *** Including module: fstab-sys ***
Nov 29 05:36:03 np0005539508.novalocal dracut[1287]: *** Including module: rootfs-block ***
Nov 29 05:36:03 np0005539508.novalocal dracut[1287]: *** Including module: terminfo ***
Nov 29 05:36:03 np0005539508.novalocal dracut[1287]: *** Including module: udev-rules ***
Nov 29 05:36:04 np0005539508.novalocal dracut[1287]: Skipping udev rule: 91-permissions.rules
Nov 29 05:36:04 np0005539508.novalocal dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 29 05:36:04 np0005539508.novalocal dracut[1287]: *** Including module: virtiofs ***
Nov 29 05:36:04 np0005539508.novalocal dracut[1287]: *** Including module: dracut-systemd ***
Nov 29 05:36:04 np0005539508.novalocal dracut[1287]: *** Including module: usrmount ***
Nov 29 05:36:04 np0005539508.novalocal dracut[1287]: *** Including module: base ***
Nov 29 05:36:04 np0005539508.novalocal dracut[1287]: *** Including module: fs-lib ***
Nov 29 05:36:04 np0005539508.novalocal dracut[1287]: *** Including module: kdumpbase ***
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:   microcode_ctl module: mangling fw_dir
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: configuration "intel" is ignored
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]: *** Including module: openssl ***
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]: *** Including module: shutdown ***
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]: *** Including module: squash ***
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]: *** Including modules done ***
Nov 29 05:36:05 np0005539508.novalocal dracut[1287]: *** Installing kernel module dependencies ***
Nov 29 05:36:06 np0005539508.novalocal dracut[1287]: *** Installing kernel module dependencies done ***
Nov 29 05:36:06 np0005539508.novalocal dracut[1287]: *** Resolving executable dependencies ***
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: Cannot change IRQ 35 affinity: Operation not permitted
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: IRQ 35 affinity is now unmanaged
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: Cannot change IRQ 33 affinity: Operation not permitted
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: IRQ 33 affinity is now unmanaged
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: IRQ 31 affinity is now unmanaged
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: IRQ 28 affinity is now unmanaged
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: Cannot change IRQ 34 affinity: Operation not permitted
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: IRQ 34 affinity is now unmanaged
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: IRQ 32 affinity is now unmanaged
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: IRQ 30 affinity is now unmanaged
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 29 05:36:06 np0005539508.novalocal irqbalance[789]: IRQ 29 affinity is now unmanaged
Nov 29 05:36:07 np0005539508.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 05:36:08 np0005539508.novalocal dracut[1287]: *** Resolving executable dependencies done ***
Nov 29 05:36:08 np0005539508.novalocal dracut[1287]: *** Generating early-microcode cpio image ***
Nov 29 05:36:08 np0005539508.novalocal dracut[1287]: *** Store current command line parameters ***
Nov 29 05:36:08 np0005539508.novalocal dracut[1287]: Stored kernel commandline:
Nov 29 05:36:08 np0005539508.novalocal dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Nov 29 05:36:08 np0005539508.novalocal dracut[1287]: *** Install squash loader ***
Nov 29 05:36:09 np0005539508.novalocal dracut[1287]: *** Squashing the files inside the initramfs ***
Nov 29 05:36:10 np0005539508.novalocal dracut[1287]: *** Squashing the files inside the initramfs done ***
Nov 29 05:36:10 np0005539508.novalocal dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 29 05:36:10 np0005539508.novalocal dracut[1287]: *** Hardlinking files ***
Nov 29 05:36:10 np0005539508.novalocal dracut[1287]: Mode:           real
Nov 29 05:36:10 np0005539508.novalocal dracut[1287]: Files:          50
Nov 29 05:36:10 np0005539508.novalocal dracut[1287]: Linked:         0 files
Nov 29 05:36:10 np0005539508.novalocal dracut[1287]: Compared:       0 xattrs
Nov 29 05:36:10 np0005539508.novalocal dracut[1287]: Compared:       0 files
Nov 29 05:36:10 np0005539508.novalocal dracut[1287]: Saved:          0 B
Nov 29 05:36:10 np0005539508.novalocal dracut[1287]: Duration:       0.000550 seconds
Nov 29 05:36:10 np0005539508.novalocal dracut[1287]: *** Hardlinking files done ***
Nov 29 05:36:10 np0005539508.novalocal dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 29 05:36:11 np0005539508.novalocal kdumpctl[1016]: kdump: kexec: loaded kdump kernel
Nov 29 05:36:11 np0005539508.novalocal kdumpctl[1016]: kdump: Starting kdump: [OK]
Nov 29 05:36:11 np0005539508.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 29 05:36:11 np0005539508.novalocal systemd[1]: Startup finished in 1.781s (kernel) + 2.583s (initrd) + 17.578s (userspace) = 21.943s.
Nov 29 05:36:27 np0005539508.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 05:36:40 np0005539508.novalocal sshd-session[4298]: Received disconnect from 45.78.217.106 port 41580:11: Bye Bye [preauth]
Nov 29 05:36:40 np0005539508.novalocal sshd-session[4298]: Disconnected from authenticating user root 45.78.217.106 port 41580 [preauth]
Nov 29 05:37:40 np0005539508.novalocal sshd-session[4300]: Accepted publickey for zuul from 38.102.83.114 port 42070 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 29 05:37:40 np0005539508.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 29 05:37:40 np0005539508.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 29 05:37:40 np0005539508.novalocal systemd-logind[797]: New session 1 of user zuul.
Nov 29 05:37:40 np0005539508.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 29 05:37:40 np0005539508.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: Queued start job for default target Main User Target.
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: Created slice User Application Slice.
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: Reached target Paths.
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: Reached target Timers.
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: Starting D-Bus User Message Bus Socket...
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: Starting Create User's Volatile Files and Directories...
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: Finished Create User's Volatile Files and Directories.
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: Listening on D-Bus User Message Bus Socket.
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: Reached target Sockets.
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: Reached target Basic System.
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: Reached target Main User Target.
Nov 29 05:37:40 np0005539508.novalocal systemd[4304]: Startup finished in 100ms.
Nov 29 05:37:40 np0005539508.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 29 05:37:40 np0005539508.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 29 05:37:40 np0005539508.novalocal sshd-session[4300]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:37:41 np0005539508.novalocal python3[4386]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:37:44 np0005539508.novalocal python3[4414]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:37:52 np0005539508.novalocal python3[4472]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:37:53 np0005539508.novalocal python3[4512]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 29 05:37:55 np0005539508.novalocal python3[4538]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrxzXgpPmVv8+7+5w1Oy1RsXOPeqdxTcUlq37d0RcYulAAKXWla/qJwAX46v5xh/Mg4GnRpk77lvDWcVnOQjFYQg3OeLmFgDDNPV0YL7URmIe/MvgcqM+Kx7/SQjk+hEt7rUIqkFUjeREX60T5eTEMANFgJrljqZcBTMgYr67x4v7oFELzKuZIO0SCAprJ9NYmdRaC+DsjZjU+DuFdHBnfZCpgkTFMCda2FAS9BneAVOIMCBu5RgNVJXeAgIsPX9GNX3qDJMKOluQLOW++2gbue3S1Nrs1GMPm+IPRD4yWc9eZs1tpR1jdP1BEPBpyQRQlUn4z7BUdEogSzYiXCSmqzN1o/R3mdi16bG8e2lHve5MQFABPko8KsgVOJu0H7b7wGo/oGdXH7sdlKuGoWxWyTFcq3RcVkaVgjKtt6zeswkrpxMUv9/6NXPrhIWqdQm/wVw0Pv2p98yq10QRPyBv5yI8zcNjxueUl3aM8SZML87E6lhkUFFdAuVof+Sl5Pz8= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:37:55 np0005539508.novalocal python3[4562]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:37:56 np0005539508.novalocal python3[4661]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:37:56 np0005539508.novalocal python3[4732]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764394676.1403496-251-129804431246993/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=601e897125784122ba5d7472ada57b1d_id_rsa follow=False checksum=5ac8bea8bfb8f348688bf24843ddb1285b2d351d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:37:57 np0005539508.novalocal python3[4855]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:37:57 np0005539508.novalocal python3[4926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764394677.150716-306-42429902461958/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=601e897125784122ba5d7472ada57b1d_id_rsa.pub follow=False checksum=48b31d706687f3385690285b8caeaea67ea8286c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:37:59 np0005539508.novalocal python3[4974]: ansible-ping Invoked with data=pong
Nov 29 05:38:00 np0005539508.novalocal python3[4998]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:38:02 np0005539508.novalocal python3[5056]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 29 05:38:03 np0005539508.novalocal python3[5088]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:03 np0005539508.novalocal python3[5112]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:03 np0005539508.novalocal python3[5136]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:05 np0005539508.novalocal python3[5160]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:05 np0005539508.novalocal python3[5184]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:05 np0005539508.novalocal python3[5208]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:07 np0005539508.novalocal sudo[5232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghufphelxivvqjbmoyyjafjsagdnapms ; /usr/bin/python3'
Nov 29 05:38:07 np0005539508.novalocal sudo[5232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:38:07 np0005539508.novalocal python3[5234]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:07 np0005539508.novalocal sudo[5232]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:07 np0005539508.novalocal sudo[5310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owbgqubixfkxlvvqdjjkvrwioirqotuj ; /usr/bin/python3'
Nov 29 05:38:07 np0005539508.novalocal sudo[5310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:38:08 np0005539508.novalocal python3[5312]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:38:08 np0005539508.novalocal sudo[5310]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:08 np0005539508.novalocal sudo[5383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tksfpwfasynkkalfdxpuvztqncezdohq ; /usr/bin/python3'
Nov 29 05:38:08 np0005539508.novalocal sudo[5383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:38:08 np0005539508.novalocal python3[5385]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764394687.5734053-31-214443851879255/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:08 np0005539508.novalocal sudo[5383]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:09 np0005539508.novalocal python3[5433]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:09 np0005539508.novalocal python3[5457]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:09 np0005539508.novalocal python3[5481]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:10 np0005539508.novalocal python3[5505]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:10 np0005539508.novalocal python3[5529]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:10 np0005539508.novalocal python3[5553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:10 np0005539508.novalocal python3[5577]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:11 np0005539508.novalocal python3[5601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:11 np0005539508.novalocal python3[5625]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:11 np0005539508.novalocal python3[5649]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:12 np0005539508.novalocal python3[5673]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:12 np0005539508.novalocal python3[5697]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:12 np0005539508.novalocal python3[5721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:12 np0005539508.novalocal python3[5745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:13 np0005539508.novalocal python3[5769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:13 np0005539508.novalocal python3[5793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:13 np0005539508.novalocal python3[5817]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:14 np0005539508.novalocal python3[5841]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:14 np0005539508.novalocal python3[5865]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:14 np0005539508.novalocal python3[5889]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:14 np0005539508.novalocal python3[5913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:15 np0005539508.novalocal python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:15 np0005539508.novalocal python3[5961]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:15 np0005539508.novalocal python3[5985]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:15 np0005539508.novalocal python3[6009]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:16 np0005539508.novalocal python3[6033]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:38:18 np0005539508.novalocal sudo[6057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njczxdxhhcimusupihtrwrnxbdpfvfot ; /usr/bin/python3'
Nov 29 05:38:18 np0005539508.novalocal sudo[6057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:38:19 np0005539508.novalocal python3[6059]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 05:38:19 np0005539508.novalocal systemd[1]: Starting Time & Date Service...
Nov 29 05:38:19 np0005539508.novalocal systemd[1]: Started Time & Date Service.
Nov 29 05:38:19 np0005539508.novalocal systemd-timedated[6061]: Changed time zone to 'UTC' (UTC).
Nov 29 05:38:19 np0005539508.novalocal sudo[6057]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:19 np0005539508.novalocal sudo[6088]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiuicvuzpmvvzrkkicxomrwybhlvjxbw ; /usr/bin/python3'
Nov 29 05:38:19 np0005539508.novalocal sudo[6088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:38:19 np0005539508.novalocal python3[6090]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:19 np0005539508.novalocal sudo[6088]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:20 np0005539508.novalocal python3[6166]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:38:20 np0005539508.novalocal python3[6237]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764394700.0193212-251-171256774323141/source _original_basename=tmpmtniz78x follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:21 np0005539508.novalocal python3[6337]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:38:21 np0005539508.novalocal python3[6408]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764394701.1279824-301-78693874424833/source _original_basename=tmpkukt5feb follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:22 np0005539508.novalocal sudo[6508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibppsazmszxsqviugedxovkjmgmjpmya ; /usr/bin/python3'
Nov 29 05:38:22 np0005539508.novalocal sudo[6508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:38:22 np0005539508.novalocal python3[6510]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:38:22 np0005539508.novalocal sudo[6508]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:23 np0005539508.novalocal sudo[6581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmbrlluccgamokqjntgqkjjruqcqhskv ; /usr/bin/python3'
Nov 29 05:38:23 np0005539508.novalocal sudo[6581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:38:23 np0005539508.novalocal python3[6583]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764394702.4326034-381-247193518684047/source _original_basename=tmpbh_psin_ follow=False checksum=0a5264336eaf669ce906803fabc64043ef3757da backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:23 np0005539508.novalocal sudo[6581]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:23 np0005539508.novalocal python3[6631]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:38:23 np0005539508.novalocal python3[6657]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:38:24 np0005539508.novalocal sudo[6735]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrazphalqtpbsluwhibcipptmgdctldb ; /usr/bin/python3'
Nov 29 05:38:24 np0005539508.novalocal sudo[6735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:38:24 np0005539508.novalocal python3[6737]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:38:24 np0005539508.novalocal sudo[6735]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:24 np0005539508.novalocal sudo[6808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfdsnwpfzrcrgcuffhxtqsqexljjoryo ; /usr/bin/python3'
Nov 29 05:38:24 np0005539508.novalocal sudo[6808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:38:25 np0005539508.novalocal python3[6810]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764394704.3634124-451-108775467523355/source _original_basename=tmpotslm687 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:25 np0005539508.novalocal sudo[6808]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:25 np0005539508.novalocal sudo[6859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhxoboirzkwtnpvtvxbjaqkuxiuaxiiq ; /usr/bin/python3'
Nov 29 05:38:25 np0005539508.novalocal sudo[6859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:38:25 np0005539508.novalocal python3[6861]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-3d5b-5bb0-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:38:25 np0005539508.novalocal sudo[6859]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:26 np0005539508.novalocal python3[6889]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-3d5b-5bb0-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 29 05:38:27 np0005539508.novalocal python3[6917]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:45 np0005539508.novalocal sudo[6941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdzgoposoqlhqftxjtncmdnowdxqhrve ; /usr/bin/python3'
Nov 29 05:38:45 np0005539508.novalocal sudo[6941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:38:46 np0005539508.novalocal python3[6943]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:38:46 np0005539508.novalocal sudo[6941]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:49 np0005539508.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 05:39:27 np0005539508.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 05:39:27 np0005539508.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 29 05:39:27 np0005539508.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 29 05:39:27 np0005539508.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 29 05:39:27 np0005539508.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 29 05:39:27 np0005539508.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 29 05:39:27 np0005539508.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 29 05:39:27 np0005539508.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 29 05:39:27 np0005539508.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 29 05:39:27 np0005539508.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 29 05:39:27 np0005539508.novalocal NetworkManager[861]: <info>  [1764394767.5718] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 05:39:27 np0005539508.novalocal systemd-udevd[6949]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 05:39:27 np0005539508.novalocal NetworkManager[861]: <info>  [1764394767.5922] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:39:27 np0005539508.novalocal NetworkManager[861]: <info>  [1764394767.5961] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 29 05:39:27 np0005539508.novalocal NetworkManager[861]: <info>  [1764394767.5968] device (eth1): carrier: link connected
Nov 29 05:39:27 np0005539508.novalocal NetworkManager[861]: <info>  [1764394767.5971] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 05:39:27 np0005539508.novalocal NetworkManager[861]: <info>  [1764394767.5983] policy: auto-activating connection 'Wired connection 1' (ca3faf74-3a1e-393e-b2c9-9f72990abe6a)
Nov 29 05:39:27 np0005539508.novalocal NetworkManager[861]: <info>  [1764394767.5990] device (eth1): Activation: starting connection 'Wired connection 1' (ca3faf74-3a1e-393e-b2c9-9f72990abe6a)
Nov 29 05:39:27 np0005539508.novalocal NetworkManager[861]: <info>  [1764394767.5991] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:39:27 np0005539508.novalocal NetworkManager[861]: <info>  [1764394767.5995] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:39:27 np0005539508.novalocal NetworkManager[861]: <info>  [1764394767.6000] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:39:27 np0005539508.novalocal NetworkManager[861]: <info>  [1764394767.6010] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 05:39:28 np0005539508.novalocal python3[6975]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-4e5a-44df-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:39:30 np0005539508.novalocal sshd-session[6946]: Received disconnect from 45.78.217.106 port 42688:11: Bye Bye [preauth]
Nov 29 05:39:30 np0005539508.novalocal sshd-session[6946]: Disconnected from 45.78.217.106 port 42688 [preauth]
Nov 29 05:39:38 np0005539508.novalocal sudo[7053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flyxjfnvhyxpyvvcqfvjhvuwtguvowwd ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 05:39:38 np0005539508.novalocal sudo[7053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:39:38 np0005539508.novalocal python3[7055]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:39:38 np0005539508.novalocal sudo[7053]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:38 np0005539508.novalocal sudo[7126]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epcesfjgraysbzjkvuasxkqrdnzjdzwn ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 05:39:38 np0005539508.novalocal sudo[7126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:39:38 np0005539508.novalocal python3[7128]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764394777.9972265-104-249937094941339/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=238071955a4d7097a928b7c267e7f2bab5a0e0d2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:39:38 np0005539508.novalocal sudo[7126]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:39 np0005539508.novalocal sudo[7176]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cchuwoofrengsdhynfvhopdksacxpjku ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 05:39:39 np0005539508.novalocal sudo[7176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:39:39 np0005539508.novalocal python3[7178]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:39:39 np0005539508.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 05:39:39 np0005539508.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 29 05:39:39 np0005539508.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 29 05:39:39 np0005539508.novalocal systemd[1]: Stopping Network Manager...
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[861]: <info>  [1764394779.6582] caught SIGTERM, shutting down normally.
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[861]: <info>  [1764394779.6594] dhcp4 (eth0): canceled DHCP transaction
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[861]: <info>  [1764394779.6594] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[861]: <info>  [1764394779.6594] dhcp4 (eth0): state changed no lease
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[861]: <info>  [1764394779.6598] manager: NetworkManager state is now CONNECTING
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[861]: <info>  [1764394779.6659] dhcp4 (eth1): canceled DHCP transaction
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[861]: <info>  [1764394779.6660] dhcp4 (eth1): state changed no lease
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[861]: <info>  [1764394779.6758] exiting (success)
Nov 29 05:39:39 np0005539508.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 05:39:39 np0005539508.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 05:39:39 np0005539508.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 05:39:39 np0005539508.novalocal systemd[1]: Stopped Network Manager.
Nov 29 05:39:39 np0005539508.novalocal systemd[1]: NetworkManager.service: Consumed 1.628s CPU time, 9.9M memory peak.
Nov 29 05:39:39 np0005539508.novalocal systemd[1]: Starting Network Manager...
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.7444] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:b7b17a39-22f5-4f4f-9861-b1bcbadcfe77)
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.7449] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.7512] manager[0x55f814f93070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 05:39:39 np0005539508.novalocal systemd[1]: Starting Hostname Service...
Nov 29 05:39:39 np0005539508.novalocal systemd[1]: Started Hostname Service.
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8665] hostname: hostname: using hostnamed
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8667] hostname: static hostname changed from (none) to "np0005539508.novalocal"
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8680] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8688] manager[0x55f814f93070]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8690] manager[0x55f814f93070]: rfkill: WWAN hardware radio set enabled
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8739] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8740] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8741] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8741] manager: Networking is enabled by state file
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8745] settings: Loaded settings plugin: keyfile (internal)
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8752] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8793] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8807] dhcp: init: Using DHCP client 'internal'
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8812] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8822] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8830] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8845] device (lo): Activation: starting connection 'lo' (1e70ab37-1fe6-47fd-afad-f3ac90d7657d)
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8858] device (eth0): carrier: link connected
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8866] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8875] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8877] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8890] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8907] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8921] device (eth1): carrier: link connected
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8930] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8943] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (ca3faf74-3a1e-393e-b2c9-9f72990abe6a) (indicated)
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8945] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8957] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8971] device (eth1): Activation: starting connection 'Wired connection 1' (ca3faf74-3a1e-393e-b2c9-9f72990abe6a)
Nov 29 05:39:39 np0005539508.novalocal systemd[1]: Started Network Manager.
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.8984] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9018] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9025] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9029] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9033] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9037] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9042] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9045] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9049] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9059] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9063] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9075] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9079] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9102] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9109] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9119] device (lo): Activation: successful, device activated.
Nov 29 05:39:39 np0005539508.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9137] dhcp4 (eth0): state changed new lease, address=38.102.83.22
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9148] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9241] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9279] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9281] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9285] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9289] device (eth0): Activation: successful, device activated.
Nov 29 05:39:39 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394779.9295] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 05:39:39 np0005539508.novalocal sudo[7176]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:40 np0005539508.novalocal python3[7264]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-4e5a-44df-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:39:50 np0005539508.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 05:40:09 np0005539508.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.7602] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 05:40:24 np0005539508.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 05:40:24 np0005539508.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.7899] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.7902] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.7912] device (eth1): Activation: successful, device activated.
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.7921] manager: startup complete
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.7923] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <warn>  [1764394824.7931] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.7942] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 29 05:40:24 np0005539508.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.8059] dhcp4 (eth1): canceled DHCP transaction
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.8060] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.8060] dhcp4 (eth1): state changed no lease
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.8082] policy: auto-activating connection 'ci-private-network' (b3ca7565-e6c0-5ba2-a076-c2cd58810e8e)
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.8089] device (eth1): Activation: starting connection 'ci-private-network' (b3ca7565-e6c0-5ba2-a076-c2cd58810e8e)
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.8090] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.8094] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.8104] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.8116] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.8161] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.8164] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:40:24 np0005539508.novalocal NetworkManager[7189]: <info>  [1764394824.8174] device (eth1): Activation: successful, device activated.
Nov 29 05:40:28 np0005539508.novalocal systemd[4304]: Starting Mark boot as successful...
Nov 29 05:40:28 np0005539508.novalocal systemd[4304]: Finished Mark boot as successful.
Nov 29 05:40:34 np0005539508.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 05:40:40 np0005539508.novalocal sshd-session[4313]: Received disconnect from 38.102.83.114 port 42070:11: disconnected by user
Nov 29 05:40:40 np0005539508.novalocal sshd-session[4313]: Disconnected from user zuul 38.102.83.114 port 42070
Nov 29 05:40:40 np0005539508.novalocal sshd-session[4300]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:40:40 np0005539508.novalocal systemd-logind[797]: Session 1 logged out. Waiting for processes to exit.
Nov 29 05:41:43 np0005539508.novalocal sshd-session[7293]: Accepted publickey for zuul from 38.102.83.114 port 49658 ssh2: RSA SHA256:MGJJb6X2bjkH8oWT85dgz2a/TwKBbh3/GDOWF3tnPlY
Nov 29 05:41:43 np0005539508.novalocal systemd-logind[797]: New session 3 of user zuul.
Nov 29 05:41:43 np0005539508.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 29 05:41:43 np0005539508.novalocal sshd-session[7293]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:41:43 np0005539508.novalocal sudo[7372]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfijmvvhfyxpgidvxociyforippxgfhb ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 05:41:43 np0005539508.novalocal sudo[7372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:41:43 np0005539508.novalocal python3[7374]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:41:43 np0005539508.novalocal sudo[7372]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:44 np0005539508.novalocal sudo[7445]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eovblyuttrqglrjwtxukesxvzfattlpk ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 05:41:44 np0005539508.novalocal sudo[7445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:41:44 np0005539508.novalocal python3[7447]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764394903.6208549-373-276086504366831/source _original_basename=tmpaonanc0i follow=False checksum=95c43167cb69fbe3f3b9eff0c3ecf63c2bbd5b70 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:41:44 np0005539508.novalocal sudo[7445]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:48 np0005539508.novalocal sshd-session[7296]: Connection closed by 38.102.83.114 port 49658
Nov 29 05:41:48 np0005539508.novalocal sshd-session[7293]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:41:48 np0005539508.novalocal systemd-logind[797]: Session 3 logged out. Waiting for processes to exit.
Nov 29 05:41:48 np0005539508.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 29 05:41:48 np0005539508.novalocal systemd-logind[797]: Removed session 3.
Nov 29 05:42:12 np0005539508.novalocal sshd-session[7474]: Received disconnect from 45.78.217.106 port 34142:11: Bye Bye [preauth]
Nov 29 05:42:12 np0005539508.novalocal sshd-session[7474]: Disconnected from 45.78.217.106 port 34142 [preauth]
Nov 29 05:43:14 np0005539508.novalocal sshd-session[7476]: Received disconnect from 193.46.255.217 port 44056:11:  [preauth]
Nov 29 05:43:14 np0005539508.novalocal sshd-session[7476]: Disconnected from authenticating user root 193.46.255.217 port 44056 [preauth]
Nov 29 05:43:28 np0005539508.novalocal systemd[4304]: Created slice User Background Tasks Slice.
Nov 29 05:43:28 np0005539508.novalocal systemd[4304]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 05:43:28 np0005539508.novalocal systemd[4304]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 05:44:44 np0005539508.novalocal sshd-session[7479]: Invalid user devuser from 45.78.217.106 port 42206
Nov 29 05:44:46 np0005539508.novalocal sshd-session[7479]: Received disconnect from 45.78.217.106 port 42206:11: Bye Bye [preauth]
Nov 29 05:44:46 np0005539508.novalocal sshd-session[7479]: Disconnected from invalid user devuser 45.78.217.106 port 42206 [preauth]
Nov 29 05:47:03 np0005539508.novalocal sshd-session[7483]: Accepted publickey for zuul from 38.102.83.114 port 50136 ssh2: RSA SHA256:MGJJb6X2bjkH8oWT85dgz2a/TwKBbh3/GDOWF3tnPlY
Nov 29 05:47:03 np0005539508.novalocal systemd-logind[797]: New session 4 of user zuul.
Nov 29 05:47:03 np0005539508.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 29 05:47:03 np0005539508.novalocal sshd-session[7483]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:47:03 np0005539508.novalocal sudo[7510]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmpxzjfdbljfxwkqflgegdrwqvdbtyox ; /usr/bin/python3'
Nov 29 05:47:03 np0005539508.novalocal sudo[7510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:03 np0005539508.novalocal python3[7512]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-b110-1686-000000000ca2-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:47:03 np0005539508.novalocal sudo[7510]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:04 np0005539508.novalocal sudo[7539]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkcgyffkkbeevvhbndvwgjttbvgvqbix ; /usr/bin/python3'
Nov 29 05:47:04 np0005539508.novalocal sudo[7539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:04 np0005539508.novalocal python3[7541]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:47:04 np0005539508.novalocal sudo[7539]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:04 np0005539508.novalocal sudo[7565]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrbsqccxtdbnerrzszeivrmhxktyjgks ; /usr/bin/python3'
Nov 29 05:47:04 np0005539508.novalocal sudo[7565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:04 np0005539508.novalocal python3[7567]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:47:04 np0005539508.novalocal sudo[7565]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:04 np0005539508.novalocal sudo[7591]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppgdgzkhfxmwkmwzqruulfeclzdicoml ; /usr/bin/python3'
Nov 29 05:47:04 np0005539508.novalocal sudo[7591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:04 np0005539508.novalocal python3[7593]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:47:04 np0005539508.novalocal sudo[7591]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:04 np0005539508.novalocal sudo[7617]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-larlefogpgrrvwxogiiqxyjyzfzhucug ; /usr/bin/python3'
Nov 29 05:47:04 np0005539508.novalocal sudo[7617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:05 np0005539508.novalocal python3[7619]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:47:05 np0005539508.novalocal sudo[7617]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:05 np0005539508.novalocal sudo[7643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpzrruridtgntsroxnhbslwwulgypbrd ; /usr/bin/python3'
Nov 29 05:47:05 np0005539508.novalocal sudo[7643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:05 np0005539508.novalocal python3[7645]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:47:05 np0005539508.novalocal sudo[7643]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:05 np0005539508.novalocal sudo[7721]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmvkurgybqphxrkppocxfeyjmddfyjsc ; /usr/bin/python3'
Nov 29 05:47:05 np0005539508.novalocal sudo[7721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:06 np0005539508.novalocal python3[7723]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:47:06 np0005539508.novalocal sudo[7721]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:06 np0005539508.novalocal sudo[7794]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkyuybibhaeefgpscmcmsskepttaifhs ; /usr/bin/python3'
Nov 29 05:47:06 np0005539508.novalocal sudo[7794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:06 np0005539508.novalocal python3[7796]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764395225.8410692-365-183849364866585/source _original_basename=tmpvi_grj7t follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:47:06 np0005539508.novalocal sudo[7794]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:07 np0005539508.novalocal sudo[7844]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bocgdooqykqzpjpxbvdgrsbqqntodtgn ; /usr/bin/python3'
Nov 29 05:47:07 np0005539508.novalocal sudo[7844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:07 np0005539508.novalocal python3[7846]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 05:47:07 np0005539508.novalocal systemd[1]: Reloading.
Nov 29 05:47:07 np0005539508.novalocal systemd-rc-local-generator[7867]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:47:08 np0005539508.novalocal sudo[7844]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:09 np0005539508.novalocal sudo[7899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsofrcoszlaimrpzwxnpfwwfzjmxjodj ; /usr/bin/python3'
Nov 29 05:47:09 np0005539508.novalocal sudo[7899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:09 np0005539508.novalocal python3[7901]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 29 05:47:09 np0005539508.novalocal sudo[7899]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:09 np0005539508.novalocal sudo[7925]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ordevpmijillkeacyomubqafbpckbrkc ; /usr/bin/python3'
Nov 29 05:47:09 np0005539508.novalocal sudo[7925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:10 np0005539508.novalocal python3[7927]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:47:10 np0005539508.novalocal sudo[7925]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:10 np0005539508.novalocal sudo[7953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ithzakswwhnmnmyovyqgvjdimrsgxfjp ; /usr/bin/python3'
Nov 29 05:47:10 np0005539508.novalocal sudo[7953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:10 np0005539508.novalocal python3[7955]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:47:10 np0005539508.novalocal sudo[7953]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:10 np0005539508.novalocal sudo[7981]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxhhuiqxyvpuqrjjhitbhwofgbysgewp ; /usr/bin/python3'
Nov 29 05:47:10 np0005539508.novalocal sudo[7981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:10 np0005539508.novalocal python3[7983]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:47:10 np0005539508.novalocal sudo[7981]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:10 np0005539508.novalocal sudo[8009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjwhtrotrtoggpbghljnizydouesaqpr ; /usr/bin/python3'
Nov 29 05:47:10 np0005539508.novalocal sudo[8009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:10 np0005539508.novalocal python3[8011]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:47:10 np0005539508.novalocal sudo[8009]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:11 np0005539508.novalocal python3[8038]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-b110-1686-000000000ca9-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:47:12 np0005539508.novalocal python3[8068]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 05:47:15 np0005539508.novalocal sshd-session[7486]: Connection closed by 38.102.83.114 port 50136
Nov 29 05:47:15 np0005539508.novalocal sshd-session[7483]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:47:15 np0005539508.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 29 05:47:15 np0005539508.novalocal systemd[1]: session-4.scope: Consumed 4.478s CPU time.
Nov 29 05:47:15 np0005539508.novalocal systemd-logind[797]: Session 4 logged out. Waiting for processes to exit.
Nov 29 05:47:15 np0005539508.novalocal systemd-logind[797]: Removed session 4.
Nov 29 05:47:16 np0005539508.novalocal sshd-session[8073]: Accepted publickey for zuul from 38.102.83.114 port 49744 ssh2: RSA SHA256:MGJJb6X2bjkH8oWT85dgz2a/TwKBbh3/GDOWF3tnPlY
Nov 29 05:47:16 np0005539508.novalocal systemd-logind[797]: New session 5 of user zuul.
Nov 29 05:47:16 np0005539508.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 29 05:47:16 np0005539508.novalocal sshd-session[8073]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:47:16 np0005539508.novalocal sudo[8100]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbdzxmnneydsohdxauitccqvyciibtky ; /usr/bin/python3'
Nov 29 05:47:16 np0005539508.novalocal sudo[8100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:47:17 np0005539508.novalocal python3[8102]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 05:47:32 np0005539508.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 05:47:32 np0005539508.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 05:47:32 np0005539508.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 05:47:32 np0005539508.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 05:47:32 np0005539508.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 05:47:32 np0005539508.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 05:47:32 np0005539508.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 05:47:32 np0005539508.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 05:47:41 np0005539508.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 05:47:41 np0005539508.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 05:47:41 np0005539508.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 05:47:41 np0005539508.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 05:47:41 np0005539508.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 05:47:41 np0005539508.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 05:47:41 np0005539508.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 05:47:41 np0005539508.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 05:47:50 np0005539508.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 05:47:50 np0005539508.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 05:47:50 np0005539508.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 05:47:50 np0005539508.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 05:47:50 np0005539508.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 05:47:50 np0005539508.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 05:47:50 np0005539508.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 05:47:50 np0005539508.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 05:47:51 np0005539508.novalocal setsebool[8170]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 29 05:47:51 np0005539508.novalocal setsebool[8170]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 29 05:48:02 np0005539508.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 29 05:48:02 np0005539508.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 05:48:02 np0005539508.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 05:48:02 np0005539508.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 05:48:02 np0005539508.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 05:48:02 np0005539508.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 05:48:02 np0005539508.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 05:48:02 np0005539508.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 05:48:20 np0005539508.novalocal dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 05:48:20 np0005539508.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 05:48:20 np0005539508.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 29 05:48:20 np0005539508.novalocal systemd[1]: Reloading.
Nov 29 05:48:20 np0005539508.novalocal systemd-rc-local-generator[8919]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:48:21 np0005539508.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 05:48:22 np0005539508.novalocal sudo[8100]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:24 np0005539508.novalocal python3[11614]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-4d52-d96a-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:48:25 np0005539508.novalocal kernel: evm: overlay not supported
Nov 29 05:48:25 np0005539508.novalocal systemd[4304]: Starting D-Bus User Message Bus...
Nov 29 05:48:25 np0005539508.novalocal dbus-broker-launch[12487]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 29 05:48:25 np0005539508.novalocal dbus-broker-launch[12487]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 29 05:48:25 np0005539508.novalocal systemd[4304]: Started D-Bus User Message Bus.
Nov 29 05:48:25 np0005539508.novalocal dbus-broker-lau[12487]: Ready
Nov 29 05:48:25 np0005539508.novalocal systemd[4304]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 05:48:25 np0005539508.novalocal systemd[4304]: Created slice Slice /user.
Nov 29 05:48:25 np0005539508.novalocal systemd[4304]: podman-12317.scope: unit configures an IP firewall, but not running as root.
Nov 29 05:48:25 np0005539508.novalocal systemd[4304]: (This warning is only shown for the first unit using IP firewalling.)
Nov 29 05:48:25 np0005539508.novalocal systemd[4304]: Started podman-12317.scope.
Nov 29 05:48:25 np0005539508.novalocal systemd[4304]: Started podman-pause-6214d594.scope.
Nov 29 05:48:26 np0005539508.novalocal sudo[13051]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neiwwzrdpimnqenazydyikcxlgwhgbyb ; /usr/bin/python3'
Nov 29 05:48:26 np0005539508.novalocal sudo[13051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:48:26 np0005539508.novalocal python3[13066]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.97:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.97:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:48:26 np0005539508.novalocal python3[13066]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 29 05:48:26 np0005539508.novalocal sudo[13051]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:26 np0005539508.novalocal sshd-session[8076]: Connection closed by 38.102.83.114 port 49744
Nov 29 05:48:26 np0005539508.novalocal sshd-session[8073]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:48:26 np0005539508.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Nov 29 05:48:26 np0005539508.novalocal systemd[1]: session-5.scope: Consumed 59.583s CPU time.
Nov 29 05:48:26 np0005539508.novalocal systemd-logind[797]: Session 5 logged out. Waiting for processes to exit.
Nov 29 05:48:26 np0005539508.novalocal systemd-logind[797]: Removed session 5.
Nov 29 05:48:47 np0005539508.novalocal sshd-session[20673]: Connection closed by 38.102.83.107 port 44292 [preauth]
Nov 29 05:48:47 np0005539508.novalocal sshd-session[20675]: Connection closed by 38.102.83.107 port 44278 [preauth]
Nov 29 05:48:47 np0005539508.novalocal sshd-session[20678]: Unable to negotiate with 38.102.83.107 port 44296: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 29 05:48:47 np0005539508.novalocal sshd-session[20680]: Unable to negotiate with 38.102.83.107 port 44302: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 29 05:48:47 np0005539508.novalocal sshd-session[20681]: Unable to negotiate with 38.102.83.107 port 44304: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 29 05:48:52 np0005539508.novalocal sshd-session[22359]: Accepted publickey for zuul from 38.102.83.114 port 39422 ssh2: RSA SHA256:MGJJb6X2bjkH8oWT85dgz2a/TwKBbh3/GDOWF3tnPlY
Nov 29 05:48:52 np0005539508.novalocal systemd-logind[797]: New session 6 of user zuul.
Nov 29 05:48:52 np0005539508.novalocal systemd[1]: Started Session 6 of User zuul.
Nov 29 05:48:52 np0005539508.novalocal sshd-session[22359]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:48:52 np0005539508.novalocal python3[22458]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEavs4NswnbtUkOvkddxZOa3c0S0nRNnsg86RQqSndpHonQx0HDlahei607KJa9VEo3VyPPhB6+AdHzrVqMc6KA= zuul@np0005539507.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:48:53 np0005539508.novalocal sudo[22651]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdsjxbqccurhistqfrwvgqfsizhuysvh ; /usr/bin/python3'
Nov 29 05:48:53 np0005539508.novalocal sudo[22651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:48:53 np0005539508.novalocal python3[22664]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEavs4NswnbtUkOvkddxZOa3c0S0nRNnsg86RQqSndpHonQx0HDlahei607KJa9VEo3VyPPhB6+AdHzrVqMc6KA= zuul@np0005539507.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:48:53 np0005539508.novalocal sudo[22651]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:54 np0005539508.novalocal sudo[22996]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyjtdqdmidighycgynvsfjuqrvsiatas ; /usr/bin/python3'
Nov 29 05:48:54 np0005539508.novalocal sudo[22996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:48:54 np0005539508.novalocal python3[22998]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005539508.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 29 05:48:54 np0005539508.novalocal useradd[23029]: new group: name=cloud-admin, GID=1002
Nov 29 05:48:54 np0005539508.novalocal useradd[23029]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 29 05:48:54 np0005539508.novalocal sudo[22996]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:54 np0005539508.novalocal sudo[23172]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmgglnvqdnaqdxurfxhjyoyuxhszkjlx ; /usr/bin/python3'
Nov 29 05:48:54 np0005539508.novalocal sudo[23172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:48:54 np0005539508.novalocal python3[23183]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEavs4NswnbtUkOvkddxZOa3c0S0nRNnsg86RQqSndpHonQx0HDlahei607KJa9VEo3VyPPhB6+AdHzrVqMc6KA= zuul@np0005539507.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 05:48:54 np0005539508.novalocal sudo[23172]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:55 np0005539508.novalocal sudo[23440]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvghixpiysjssvmlcdccilupzxngwryy ; /usr/bin/python3'
Nov 29 05:48:55 np0005539508.novalocal sudo[23440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:48:55 np0005539508.novalocal python3[23450]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:48:55 np0005539508.novalocal sudo[23440]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:55 np0005539508.novalocal sudo[23726]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbixluhwxvxkahwbyupbempuhpujmcrp ; /usr/bin/python3'
Nov 29 05:48:55 np0005539508.novalocal sudo[23726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:48:55 np0005539508.novalocal python3[23733]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764395335.0861526-167-277432247256758/source _original_basename=tmpb29m9yaq follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:48:55 np0005539508.novalocal sudo[23726]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:56 np0005539508.novalocal sudo[24039]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpktdscbnpxxpaypmnbcukbvacyhmvxp ; /usr/bin/python3'
Nov 29 05:48:56 np0005539508.novalocal sudo[24039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:48:56 np0005539508.novalocal python3[24050]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 29 05:48:56 np0005539508.novalocal systemd[1]: Starting Hostname Service...
Nov 29 05:48:56 np0005539508.novalocal systemd[1]: Started Hostname Service.
Nov 29 05:48:56 np0005539508.novalocal systemd-hostnamed[24126]: Changed pretty hostname to 'compute-0'
Nov 29 05:48:56 compute-0 systemd-hostnamed[24126]: Hostname set to <compute-0> (static)
Nov 29 05:48:56 compute-0 NetworkManager[7189]: <info>  [1764395336.9762] hostname: static hostname changed from "np0005539508.novalocal" to "compute-0"
Nov 29 05:48:56 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 05:48:57 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 05:48:57 compute-0 sudo[24039]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:57 compute-0 sshd-session[22402]: Connection closed by 38.102.83.114 port 39422
Nov 29 05:48:57 compute-0 sshd-session[22359]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:48:57 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 29 05:48:57 compute-0 systemd[1]: session-6.scope: Consumed 2.620s CPU time.
Nov 29 05:48:57 compute-0 systemd-logind[797]: Session 6 logged out. Waiting for processes to exit.
Nov 29 05:48:57 compute-0 systemd-logind[797]: Removed session 6.
Nov 29 05:49:07 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 05:49:11 compute-0 sshd-session[28554]: Received disconnect from 80.94.93.119 port 40736:11:  [preauth]
Nov 29 05:49:11 compute-0 sshd-session[28554]: Disconnected from authenticating user root 80.94.93.119 port 40736 [preauth]
Nov 29 05:49:15 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 05:49:15 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 05:49:15 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 5.819s CPU time.
Nov 29 05:49:15 compute-0 systemd[1]: run-r6d4e92f2203343d8b7a3b79be9bea0c0.service: Deactivated successfully.
Nov 29 05:49:27 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 05:51:18 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 29 05:51:18 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 29 05:51:18 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 29 05:51:18 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 29 05:52:16 compute-0 sshd-session[29926]: error: kex_exchange_identification: read: Connection reset by peer
Nov 29 05:52:16 compute-0 sshd-session[29926]: Connection reset by 45.140.17.97 port 58231
Nov 29 05:53:01 compute-0 sshd-session[29928]: Accepted publickey for zuul from 38.102.83.107 port 39510 ssh2: RSA SHA256:MGJJb6X2bjkH8oWT85dgz2a/TwKBbh3/GDOWF3tnPlY
Nov 29 05:53:01 compute-0 systemd-logind[797]: New session 7 of user zuul.
Nov 29 05:53:01 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 29 05:53:01 compute-0 sshd-session[29928]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:53:02 compute-0 python3[30004]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:53:04 compute-0 sudo[30118]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsotzuphmlkhsrzovzfyuywumxafhqfr ; /usr/bin/python3'
Nov 29 05:53:04 compute-0 sudo[30118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:04 compute-0 python3[30120]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:53:04 compute-0 sudo[30118]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:04 compute-0 sudo[30191]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqeqqmwnoljunrhckjymhthznhdzaxxf ; /usr/bin/python3'
Nov 29 05:53:04 compute-0 sudo[30191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:04 compute-0 python3[30193]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764395583.8706179-34045-54332860422931/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:53:04 compute-0 sudo[30191]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:04 compute-0 sudo[30217]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfidpytfmilhwiqjrnngchupfderdvfz ; /usr/bin/python3'
Nov 29 05:53:04 compute-0 sudo[30217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:05 compute-0 python3[30219]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:53:05 compute-0 sudo[30217]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:05 compute-0 sudo[30290]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nblyofmmogbsxlhefmknsuogufqpaxoy ; /usr/bin/python3'
Nov 29 05:53:05 compute-0 sudo[30290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:05 compute-0 python3[30292]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764395583.8706179-34045-54332860422931/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:53:05 compute-0 sudo[30290]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:05 compute-0 sudo[30316]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttonswacpwwxjlctfqxmldampfkhkwch ; /usr/bin/python3'
Nov 29 05:53:05 compute-0 sudo[30316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:05 compute-0 python3[30318]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:53:05 compute-0 sudo[30316]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:06 compute-0 sudo[30389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdlgbwvoqcwvepzhgicydcjdgcbxykvh ; /usr/bin/python3'
Nov 29 05:53:06 compute-0 sudo[30389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:06 compute-0 python3[30391]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764395583.8706179-34045-54332860422931/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:53:06 compute-0 sudo[30389]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:06 compute-0 sudo[30415]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewqkxjiyutovqwwekrcyctqdqcpclmfz ; /usr/bin/python3'
Nov 29 05:53:06 compute-0 sudo[30415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:06 compute-0 python3[30417]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:53:06 compute-0 sudo[30415]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:06 compute-0 sudo[30488]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vltvsztbcexaowkguevnouruzeqepbos ; /usr/bin/python3'
Nov 29 05:53:06 compute-0 sudo[30488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:07 compute-0 python3[30490]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764395583.8706179-34045-54332860422931/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:53:07 compute-0 sudo[30488]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:07 compute-0 sudo[30514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaeznidnzwgepjbqtfxrungohrsgcxwx ; /usr/bin/python3'
Nov 29 05:53:07 compute-0 sudo[30514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:07 compute-0 python3[30516]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:53:07 compute-0 sudo[30514]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:07 compute-0 sudo[30587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxvkzaghziubugiycjxnguroerqrwrrn ; /usr/bin/python3'
Nov 29 05:53:07 compute-0 sudo[30587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:07 compute-0 python3[30589]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764395583.8706179-34045-54332860422931/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:53:07 compute-0 sudo[30587]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:07 compute-0 sudo[30613]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhnulimpyvxivxnnbebjebfsfoylcjwn ; /usr/bin/python3'
Nov 29 05:53:07 compute-0 sudo[30613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:07 compute-0 python3[30615]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:53:07 compute-0 sudo[30613]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:08 compute-0 sudo[30686]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qimvknrcxnlhohrxopwyhkwsykfcecps ; /usr/bin/python3'
Nov 29 05:53:08 compute-0 sudo[30686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:08 compute-0 python3[30688]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764395583.8706179-34045-54332860422931/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:53:08 compute-0 sudo[30686]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:08 compute-0 sudo[30712]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpfgixafhcftvjgqnenxewvitsgcotwx ; /usr/bin/python3'
Nov 29 05:53:08 compute-0 sudo[30712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:08 compute-0 python3[30714]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:53:08 compute-0 sudo[30712]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:08 compute-0 sudo[30785]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avqaqdxuhiqklircsebxwnqmcqbhxium ; /usr/bin/python3'
Nov 29 05:53:08 compute-0 sudo[30785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:09 compute-0 python3[30787]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764395583.8706179-34045-54332860422931/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:53:09 compute-0 sudo[30785]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:11 compute-0 sshd-session[30812]: Unable to negotiate with 192.168.122.11 port 35594: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 29 05:53:11 compute-0 sshd-session[30813]: Unable to negotiate with 192.168.122.11 port 35606: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 29 05:53:11 compute-0 sshd-session[30815]: Connection closed by 192.168.122.11 port 35588 [preauth]
Nov 29 05:53:11 compute-0 sshd-session[30814]: Connection closed by 192.168.122.11 port 35590 [preauth]
Nov 29 05:53:11 compute-0 sshd-session[30817]: Unable to negotiate with 192.168.122.11 port 35602: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 29 05:53:20 compute-0 python3[30845]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:57:39 compute-0 sshd-session[30849]: Received disconnect from 193.46.255.217 port 44618:11:  [preauth]
Nov 29 05:57:39 compute-0 sshd-session[30849]: Disconnected from authenticating user root 193.46.255.217 port 44618 [preauth]
Nov 29 05:57:49 compute-0 sshd-session[30851]: Invalid user wordpress from 193.32.162.157 port 36582
Nov 29 05:57:51 compute-0 sshd-session[30851]: Connection closed by invalid user wordpress 193.32.162.157 port 36582 [preauth]
Nov 29 05:58:03 compute-0 sshd-session[30853]: Connection closed by authenticating user root 193.32.162.157 port 49684 [preauth]
Nov 29 05:58:14 compute-0 sshd-session[30855]: Connection closed by authenticating user root 193.32.162.157 port 50488 [preauth]
Nov 29 05:58:16 compute-0 sshd-session[30858]: Invalid user ubuntu from 31.6.212.12 port 35118
Nov 29 05:58:16 compute-0 sshd-session[30858]: Received disconnect from 31.6.212.12 port 35118:11: Bye Bye [preauth]
Nov 29 05:58:16 compute-0 sshd-session[30858]: Disconnected from invalid user ubuntu 31.6.212.12 port 35118 [preauth]
Nov 29 05:58:19 compute-0 sshd-session[29931]: Received disconnect from 38.102.83.107 port 39510:11: disconnected by user
Nov 29 05:58:19 compute-0 sshd-session[29931]: Disconnected from user zuul 38.102.83.107 port 39510
Nov 29 05:58:19 compute-0 sshd-session[29928]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:58:20 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 29 05:58:20 compute-0 systemd[1]: session-7.scope: Consumed 5.894s CPU time.
Nov 29 05:58:20 compute-0 systemd-logind[797]: Session 7 logged out. Waiting for processes to exit.
Nov 29 05:58:20 compute-0 systemd-logind[797]: Removed session 7.
Nov 29 05:58:26 compute-0 sshd-session[30857]: Connection closed by authenticating user root 193.32.162.157 port 37718 [preauth]
Nov 29 05:58:38 compute-0 sshd-session[30861]: Connection closed by authenticating user root 193.32.162.157 port 36416 [preauth]
Nov 29 05:58:49 compute-0 sshd-session[30863]: Connection closed by authenticating user root 193.32.162.157 port 50940 [preauth]
Nov 29 05:59:00 compute-0 sshd-session[30865]: Connection closed by authenticating user root 193.32.162.157 port 48742 [preauth]
Nov 29 05:59:12 compute-0 sshd-session[30867]: Connection closed by authenticating user root 193.32.162.157 port 44958 [preauth]
Nov 29 05:59:24 compute-0 sshd-session[30869]: Connection closed by authenticating user root 193.32.162.157 port 39112 [preauth]
Nov 29 05:59:34 compute-0 sshd-session[30874]: Invalid user ubuntu from 104.208.108.166 port 36884
Nov 29 05:59:34 compute-0 sshd-session[30874]: Received disconnect from 104.208.108.166 port 36884:11: Bye Bye [preauth]
Nov 29 05:59:34 compute-0 sshd-session[30874]: Disconnected from invalid user ubuntu 104.208.108.166 port 36884 [preauth]
Nov 29 05:59:35 compute-0 sshd-session[30871]: Connection closed by authenticating user root 193.32.162.157 port 60078 [preauth]
Nov 29 05:59:47 compute-0 sshd-session[30876]: Connection closed by authenticating user root 193.32.162.157 port 60760 [preauth]
Nov 29 05:59:52 compute-0 sshd-session[30880]: Invalid user admin from 79.116.35.29 port 47256
Nov 29 05:59:52 compute-0 sshd-session[30880]: Received disconnect from 79.116.35.29 port 47256:11: Bye Bye [preauth]
Nov 29 05:59:52 compute-0 sshd-session[30880]: Disconnected from invalid user admin 79.116.35.29 port 47256 [preauth]
Nov 29 05:59:59 compute-0 sshd-session[30878]: Connection closed by authenticating user root 193.32.162.157 port 58106 [preauth]
Nov 29 06:00:10 compute-0 sshd-session[30882]: Connection closed by authenticating user root 193.32.162.157 port 48790 [preauth]
Nov 29 06:00:15 compute-0 sshd-session[30887]: Invalid user ubuntu from 138.124.186.225 port 51094
Nov 29 06:00:16 compute-0 sshd-session[30887]: Received disconnect from 138.124.186.225 port 51094:11: Bye Bye [preauth]
Nov 29 06:00:16 compute-0 sshd-session[30887]: Disconnected from invalid user ubuntu 138.124.186.225 port 51094 [preauth]
Nov 29 06:00:22 compute-0 sshd-session[30885]: Connection closed by authenticating user root 193.32.162.157 port 49616 [preauth]
Nov 29 06:00:31 compute-0 sshd-session[30889]: Invalid user telegram from 193.32.162.157 port 42876
Nov 29 06:00:34 compute-0 sshd-session[30889]: Connection closed by invalid user telegram 193.32.162.157 port 42876 [preauth]
Nov 29 06:00:45 compute-0 sshd-session[30891]: Connection closed by authenticating user root 193.32.162.157 port 40858 [preauth]
Nov 29 06:00:51 compute-0 sshd-session[30897]: Invalid user gits from 104.208.108.166 port 55374
Nov 29 06:00:52 compute-0 sshd-session[30897]: Received disconnect from 104.208.108.166 port 55374:11: Bye Bye [preauth]
Nov 29 06:00:52 compute-0 sshd-session[30897]: Disconnected from invalid user gits 104.208.108.166 port 55374 [preauth]
Nov 29 06:00:56 compute-0 sshd-session[30895]: Received disconnect from 115.190.37.201 port 38772:11: Bye Bye [preauth]
Nov 29 06:00:56 compute-0 sshd-session[30895]: Disconnected from authenticating user root 115.190.37.201 port 38772 [preauth]
Nov 29 06:00:57 compute-0 sshd-session[30893]: Connection closed by authenticating user root 193.32.162.157 port 57118 [preauth]
Nov 29 06:01:01 compute-0 CROND[30902]: (root) CMD (run-parts /etc/cron.hourly)
Nov 29 06:01:01 compute-0 run-parts[30905]: (/etc/cron.hourly) starting 0anacron
Nov 29 06:01:01 compute-0 anacron[30913]: Anacron started on 2025-11-29
Nov 29 06:01:01 compute-0 anacron[30913]: Will run job `cron.daily' in 23 min.
Nov 29 06:01:01 compute-0 anacron[30913]: Will run job `cron.weekly' in 43 min.
Nov 29 06:01:01 compute-0 anacron[30913]: Will run job `cron.monthly' in 63 min.
Nov 29 06:01:01 compute-0 anacron[30913]: Jobs will be executed sequentially
Nov 29 06:01:01 compute-0 run-parts[30915]: (/etc/cron.hourly) finished 0anacron
Nov 29 06:01:01 compute-0 CROND[30901]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 29 06:01:09 compute-0 sshd-session[30899]: Connection closed by authenticating user root 193.32.162.157 port 33936 [preauth]
Nov 29 06:01:21 compute-0 sshd-session[30916]: Connection closed by authenticating user root 193.32.162.157 port 46380 [preauth]
Nov 29 06:01:29 compute-0 sshd-session[30921]: Invalid user deployer from 31.6.212.12 port 58694
Nov 29 06:01:30 compute-0 sshd-session[30921]: Received disconnect from 31.6.212.12 port 58694:11: Bye Bye [preauth]
Nov 29 06:01:30 compute-0 sshd-session[30921]: Disconnected from invalid user deployer 31.6.212.12 port 58694 [preauth]
Nov 29 06:01:32 compute-0 sshd-session[30918]: Connection closed by authenticating user root 193.32.162.157 port 35178 [preauth]
Nov 29 06:01:42 compute-0 sshd-session[30925]: Received disconnect from 103.147.159.91 port 50984:11: Bye Bye [preauth]
Nov 29 06:01:42 compute-0 sshd-session[30925]: Disconnected from authenticating user root 103.147.159.91 port 50984 [preauth]
Nov 29 06:01:44 compute-0 sshd-session[30923]: Connection closed by authenticating user root 193.32.162.157 port 58462 [preauth]
Nov 29 06:01:53 compute-0 sshd-session[30927]: Invalid user git from 193.32.162.157 port 44198
Nov 29 06:01:55 compute-0 sshd-session[30927]: Connection closed by invalid user git 193.32.162.157 port 44198 [preauth]
Nov 29 06:02:04 compute-0 sshd-session[30931]: Received disconnect from 104.208.108.166 port 43652:11: Bye Bye [preauth]
Nov 29 06:02:04 compute-0 sshd-session[30931]: Disconnected from authenticating user root 104.208.108.166 port 43652 [preauth]
Nov 29 06:02:07 compute-0 sshd-session[30929]: Connection closed by authenticating user root 193.32.162.157 port 53332 [preauth]
Nov 29 06:02:18 compute-0 sshd-session[30933]: Connection closed by authenticating user root 193.32.162.157 port 60162 [preauth]
Nov 29 06:02:22 compute-0 sshd-session[30937]: Received disconnect from 79.116.35.29 port 32980:11: Bye Bye [preauth]
Nov 29 06:02:22 compute-0 sshd-session[30937]: Disconnected from authenticating user root 79.116.35.29 port 32980 [preauth]
Nov 29 06:02:28 compute-0 sshd-session[30935]: Invalid user fuuto from 193.32.162.157 port 44070
Nov 29 06:02:30 compute-0 sshd-session[30935]: Connection closed by invalid user fuuto 193.32.162.157 port 44070 [preauth]
Nov 29 06:02:30 compute-0 sshd-session[30939]: Invalid user marvin from 138.124.186.225 port 51250
Nov 29 06:02:30 compute-0 sshd-session[30939]: Received disconnect from 138.124.186.225 port 51250:11: Bye Bye [preauth]
Nov 29 06:02:30 compute-0 sshd-session[30939]: Disconnected from invalid user marvin 138.124.186.225 port 51250 [preauth]
Nov 29 06:02:41 compute-0 sshd-session[30941]: Connection closed by authenticating user root 193.32.162.157 port 43700 [preauth]
Nov 29 06:02:46 compute-0 sshd-session[30945]: Invalid user mysql from 31.6.212.12 port 33064
Nov 29 06:02:46 compute-0 sshd-session[30945]: Received disconnect from 31.6.212.12 port 33064:11: Bye Bye [preauth]
Nov 29 06:02:46 compute-0 sshd-session[30945]: Disconnected from invalid user mysql 31.6.212.12 port 33064 [preauth]
Nov 29 06:02:53 compute-0 sshd-session[30943]: Connection closed by authenticating user root 193.32.162.157 port 35546 [preauth]
Nov 29 06:03:02 compute-0 sshd-session[30947]: Invalid user admin from 193.32.162.157 port 48730
Nov 29 06:03:04 compute-0 sshd-session[30947]: Connection closed by invalid user admin 193.32.162.157 port 48730 [preauth]
Nov 29 06:03:13 compute-0 sshd-session[30949]: Invalid user dan from 193.32.162.157 port 43746
Nov 29 06:03:16 compute-0 sshd-session[30949]: Connection closed by invalid user dan 193.32.162.157 port 43746 [preauth]
Nov 29 06:03:18 compute-0 sshd-session[30952]: Invalid user usuario1 from 104.208.108.166 port 64646
Nov 29 06:03:18 compute-0 sshd-session[30952]: Received disconnect from 104.208.108.166 port 64646:11: Bye Bye [preauth]
Nov 29 06:03:18 compute-0 sshd-session[30952]: Disconnected from invalid user usuario1 104.208.108.166 port 64646 [preauth]
Nov 29 06:03:27 compute-0 sshd-session[30951]: Connection closed by authenticating user root 193.32.162.157 port 42476 [preauth]
Nov 29 06:03:38 compute-0 sshd-session[30956]: Connection closed by authenticating user root 193.32.162.157 port 37110 [preauth]
Nov 29 06:03:39 compute-0 sshd-session[30958]: Invalid user localhost from 138.124.186.225 port 40826
Nov 29 06:03:39 compute-0 sshd-session[30958]: Received disconnect from 138.124.186.225 port 40826:11: Bye Bye [preauth]
Nov 29 06:03:39 compute-0 sshd-session[30958]: Disconnected from invalid user localhost 138.124.186.225 port 40826 [preauth]
Nov 29 06:03:45 compute-0 sshd-session[30962]: Invalid user exx from 79.116.35.29 port 60548
Nov 29 06:03:45 compute-0 sshd-session[30962]: Received disconnect from 79.116.35.29 port 60548:11: Bye Bye [preauth]
Nov 29 06:03:45 compute-0 sshd-session[30962]: Disconnected from invalid user exx 79.116.35.29 port 60548 [preauth]
Nov 29 06:03:48 compute-0 sshd-session[30964]: Invalid user ftpadmin from 103.147.159.91 port 51120
Nov 29 06:03:49 compute-0 sshd-session[30964]: Received disconnect from 103.147.159.91 port 51120:11: Bye Bye [preauth]
Nov 29 06:03:49 compute-0 sshd-session[30964]: Disconnected from invalid user ftpadmin 103.147.159.91 port 51120 [preauth]
Nov 29 06:03:50 compute-0 sshd-session[30960]: Connection closed by authenticating user root 193.32.162.157 port 47762 [preauth]
Nov 29 06:04:01 compute-0 sshd-session[30966]: Connection closed by authenticating user root 193.32.162.157 port 33300 [preauth]
Nov 29 06:04:03 compute-0 sshd-session[30969]: Received disconnect from 31.6.212.12 port 55590:11: Bye Bye [preauth]
Nov 29 06:04:03 compute-0 sshd-session[30969]: Disconnected from authenticating user root 31.6.212.12 port 55590 [preauth]
Nov 29 06:04:11 compute-0 sshd-session[30972]: Received disconnect from 193.46.255.244 port 40108:11:  [preauth]
Nov 29 06:04:11 compute-0 sshd-session[30972]: Disconnected from authenticating user root 193.46.255.244 port 40108 [preauth]
Nov 29 06:04:13 compute-0 sshd-session[30968]: Connection closed by authenticating user root 193.32.162.157 port 37762 [preauth]
Nov 29 06:04:24 compute-0 sshd-session[30974]: Connection closed by authenticating user root 193.32.162.157 port 55942 [preauth]
Nov 29 06:04:32 compute-0 sshd-session[30978]: Invalid user bodega from 104.208.108.166 port 20070
Nov 29 06:04:32 compute-0 sshd-session[30978]: Received disconnect from 104.208.108.166 port 20070:11: Bye Bye [preauth]
Nov 29 06:04:32 compute-0 sshd-session[30978]: Disconnected from invalid user bodega 104.208.108.166 port 20070 [preauth]
Nov 29 06:04:35 compute-0 sshd-session[30976]: Connection closed by authenticating user root 193.32.162.157 port 33982 [preauth]
Nov 29 06:04:40 compute-0 sshd-session[30981]: Invalid user root1 from 115.190.37.201 port 50382
Nov 29 06:04:40 compute-0 sshd-session[30981]: Received disconnect from 115.190.37.201 port 50382:11: Bye Bye [preauth]
Nov 29 06:04:40 compute-0 sshd-session[30981]: Disconnected from invalid user root1 115.190.37.201 port 50382 [preauth]
Nov 29 06:04:43 compute-0 sshd-session[30984]: Invalid user hadoop from 138.124.186.225 port 46824
Nov 29 06:04:43 compute-0 sshd-session[30984]: Received disconnect from 138.124.186.225 port 46824:11: Bye Bye [preauth]
Nov 29 06:04:43 compute-0 sshd-session[30984]: Disconnected from invalid user hadoop 138.124.186.225 port 46824 [preauth]
Nov 29 06:04:45 compute-0 sshd-session[30980]: Invalid user oracle from 193.32.162.157 port 57140
Nov 29 06:04:47 compute-0 sshd-session[30980]: Connection closed by invalid user oracle 193.32.162.157 port 57140 [preauth]
Nov 29 06:04:54 compute-0 sshd-session[30989]: Received disconnect from 79.116.35.29 port 59864:11: Bye Bye [preauth]
Nov 29 06:04:54 compute-0 sshd-session[30989]: Disconnected from authenticating user root 79.116.35.29 port 59864 [preauth]
Nov 29 06:04:58 compute-0 sshd-session[30987]: Connection closed by authenticating user root 193.32.162.157 port 38602 [preauth]
Nov 29 06:05:10 compute-0 sshd-session[30991]: Connection closed by authenticating user root 193.32.162.157 port 37722 [preauth]
Nov 29 06:05:17 compute-0 sshd-session[30995]: Invalid user javad from 31.6.212.12 port 44274
Nov 29 06:05:17 compute-0 sshd-session[30995]: Received disconnect from 31.6.212.12 port 44274:11: Bye Bye [preauth]
Nov 29 06:05:17 compute-0 sshd-session[30995]: Disconnected from invalid user javad 31.6.212.12 port 44274 [preauth]
Nov 29 06:05:20 compute-0 sshd[1008]: Timeout before authentication for connection from 101.126.81.18 to 38.102.83.22, pid = 30954
Nov 29 06:05:20 compute-0 sshd-session[30997]: Invalid user tester from 103.147.159.91 port 51238
Nov 29 06:05:20 compute-0 sshd-session[30997]: Received disconnect from 103.147.159.91 port 51238:11: Bye Bye [preauth]
Nov 29 06:05:20 compute-0 sshd-session[30997]: Disconnected from invalid user tester 103.147.159.91 port 51238 [preauth]
Nov 29 06:05:22 compute-0 sshd-session[30993]: Connection closed by authenticating user root 193.32.162.157 port 37180 [preauth]
Nov 29 06:05:31 compute-0 sshd-session[30999]: Invalid user deployer from 193.32.162.157 port 40568
Nov 29 06:05:33 compute-0 sshd-session[30999]: Connection closed by invalid user deployer 193.32.162.157 port 40568 [preauth]
Nov 29 06:05:44 compute-0 sshd-session[31004]: Invalid user admin from 138.124.186.225 port 50730
Nov 29 06:05:44 compute-0 sshd-session[31004]: Received disconnect from 138.124.186.225 port 50730:11: Bye Bye [preauth]
Nov 29 06:05:44 compute-0 sshd-session[31004]: Disconnected from invalid user admin 138.124.186.225 port 50730 [preauth]
Nov 29 06:05:45 compute-0 sshd-session[31001]: Connection closed by authenticating user root 193.32.162.157 port 44278 [preauth]
Nov 29 06:05:47 compute-0 sshd-session[31007]: Invalid user root1 from 104.208.108.166 port 18542
Nov 29 06:05:47 compute-0 sshd-session[31007]: Received disconnect from 104.208.108.166 port 18542:11: Bye Bye [preauth]
Nov 29 06:05:47 compute-0 sshd-session[31007]: Disconnected from invalid user root1 104.208.108.166 port 18542 [preauth]
Nov 29 06:05:49 compute-0 sshd-session[31010]: Received disconnect from 115.190.37.201 port 32966:11: Bye Bye [preauth]
Nov 29 06:05:49 compute-0 sshd-session[31010]: Disconnected from authenticating user root 115.190.37.201 port 32966 [preauth]
Nov 29 06:05:56 compute-0 sshd-session[31006]: Connection closed by authenticating user root 193.32.162.157 port 59006 [preauth]
Nov 29 06:05:59 compute-0 sshd-session[31013]: Invalid user localhost from 79.116.35.29 port 59184
Nov 29 06:05:59 compute-0 sshd-session[31013]: Received disconnect from 79.116.35.29 port 59184:11: Bye Bye [preauth]
Nov 29 06:05:59 compute-0 sshd-session[31013]: Disconnected from invalid user localhost 79.116.35.29 port 59184 [preauth]
Nov 29 06:06:08 compute-0 sshd-session[31012]: Connection closed by authenticating user root 193.32.162.157 port 55100 [preauth]
Nov 29 06:06:19 compute-0 sshd-session[31016]: Connection closed by authenticating user root 193.32.162.157 port 34810 [preauth]
Nov 29 06:06:26 compute-0 sshd-session[31021]: Accepted publickey for zuul from 192.168.122.30 port 58958 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:06:27 compute-0 systemd-logind[797]: New session 8 of user zuul.
Nov 29 06:06:27 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 29 06:06:27 compute-0 sshd-session[31021]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:06:28 compute-0 python3.9[31174]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:06:29 compute-0 sudo[31353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utjaodmxciuwkcdlypmsrvwiciuiuosr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396388.8545878-61-1534203587860/AnsiballZ_command.py'
Nov 29 06:06:29 compute-0 sudo[31353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:06:29 compute-0 python3.9[31355]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:06:31 compute-0 sshd-session[31019]: Connection closed by authenticating user root 193.32.162.157 port 47292 [preauth]
Nov 29 06:06:32 compute-0 sshd-session[31369]: Received disconnect from 31.6.212.12 port 52980:11: Bye Bye [preauth]
Nov 29 06:06:32 compute-0 sshd-session[31369]: Disconnected from authenticating user root 31.6.212.12 port 52980 [preauth]
Nov 29 06:06:36 compute-0 sudo[31353]: pam_unix(sudo:session): session closed for user root
Nov 29 06:06:38 compute-0 sshd-session[31024]: Connection closed by 192.168.122.30 port 58958
Nov 29 06:06:38 compute-0 sshd-session[31021]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:06:38 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 29 06:06:38 compute-0 systemd[1]: session-8.scope: Consumed 7.747s CPU time.
Nov 29 06:06:38 compute-0 systemd-logind[797]: Session 8 logged out. Waiting for processes to exit.
Nov 29 06:06:38 compute-0 systemd-logind[797]: Removed session 8.
Nov 29 06:06:42 compute-0 sshd-session[31368]: Connection closed by authenticating user root 193.32.162.157 port 57140 [preauth]
Nov 29 06:06:43 compute-0 sshd-session[31416]: Received disconnect from 138.124.186.225 port 53156:11: Bye Bye [preauth]
Nov 29 06:06:43 compute-0 sshd-session[31416]: Disconnected from authenticating user root 138.124.186.225 port 53156 [preauth]
Nov 29 06:06:44 compute-0 sshd-session[31419]: Invalid user gitea from 103.147.159.91 port 51360
Nov 29 06:06:45 compute-0 sshd-session[31419]: Received disconnect from 103.147.159.91 port 51360:11: Bye Bye [preauth]
Nov 29 06:06:45 compute-0 sshd-session[31419]: Disconnected from invalid user gitea 103.147.159.91 port 51360 [preauth]
Nov 29 06:06:54 compute-0 sshd-session[31422]: Accepted publickey for zuul from 192.168.122.30 port 37420 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:06:54 compute-0 systemd-logind[797]: New session 9 of user zuul.
Nov 29 06:06:54 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 29 06:06:54 compute-0 sshd-session[31422]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:06:54 compute-0 sshd-session[31418]: Connection closed by authenticating user root 193.32.162.157 port 50274 [preauth]
Nov 29 06:06:55 compute-0 python3.9[31576]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 06:06:56 compute-0 sshd-session[31584]: Invalid user radarr from 104.208.108.166 port 48218
Nov 29 06:06:56 compute-0 python3.9[31753]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:06:56 compute-0 sshd-session[31584]: Received disconnect from 104.208.108.166 port 48218:11: Bye Bye [preauth]
Nov 29 06:06:56 compute-0 sshd-session[31584]: Disconnected from invalid user radarr 104.208.108.166 port 48218 [preauth]
Nov 29 06:06:57 compute-0 sudo[31903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbcmeqwiwlinasysldegmysmcoaxeegd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396417.1925669-98-176811113995725/AnsiballZ_command.py'
Nov 29 06:06:57 compute-0 sudo[31903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:06:57 compute-0 python3.9[31905]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:06:57 compute-0 sudo[31903]: pam_unix(sudo:session): session closed for user root
Nov 29 06:06:59 compute-0 sudo[32056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgqksyovafltjchkdqqyquyqnwahqqfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396418.5141218-134-217546855659477/AnsiballZ_stat.py'
Nov 29 06:06:59 compute-0 sudo[32056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:06:59 compute-0 python3.9[32058]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:06:59 compute-0 sudo[32056]: pam_unix(sudo:session): session closed for user root
Nov 29 06:07:00 compute-0 sudo[32209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-galmtsdkttkaltjowlidxrggqorvwjor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396419.4669957-158-130403162883232/AnsiballZ_file.py'
Nov 29 06:07:00 compute-0 sudo[32209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:07:00 compute-0 python3.9[32211]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:07:00 compute-0 sudo[32209]: pam_unix(sudo:session): session closed for user root
Nov 29 06:07:01 compute-0 sudo[32361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmbuubfvccnuocubbednxteeswjlqpje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396420.8603268-182-115163283736205/AnsiballZ_stat.py'
Nov 29 06:07:01 compute-0 sudo[32361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:07:01 compute-0 python3.9[32363]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:07:01 compute-0 sudo[32361]: pam_unix(sudo:session): session closed for user root
Nov 29 06:07:02 compute-0 sudo[32484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysqmjqzhcqauqwyuyhkxvrddovckvpsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396420.8603268-182-115163283736205/AnsiballZ_copy.py'
Nov 29 06:07:02 compute-0 sudo[32484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:07:02 compute-0 python3.9[32486]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396420.8603268-182-115163283736205/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:07:02 compute-0 sudo[32484]: pam_unix(sudo:session): session closed for user root
Nov 29 06:07:02 compute-0 sshd-session[32487]: Invalid user javad from 79.116.35.29 port 58502
Nov 29 06:07:02 compute-0 sudo[32638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iktxviqxetmyedyjvbunjdmirhcljirv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396422.4497247-227-272235920481352/AnsiballZ_setup.py'
Nov 29 06:07:02 compute-0 sudo[32638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:07:02 compute-0 sshd-session[32487]: Received disconnect from 79.116.35.29 port 58502:11: Bye Bye [preauth]
Nov 29 06:07:02 compute-0 sshd-session[32487]: Disconnected from invalid user javad 79.116.35.29 port 58502 [preauth]
Nov 29 06:07:03 compute-0 python3.9[32640]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:07:03 compute-0 sshd-session[31449]: Invalid user test from 193.32.162.157 port 59218
Nov 29 06:07:03 compute-0 sudo[32638]: pam_unix(sudo:session): session closed for user root
Nov 29 06:07:03 compute-0 sudo[32794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwodenjfxytdsmqdbphizcebvnmcoziy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396423.6038237-251-14290646990647/AnsiballZ_file.py'
Nov 29 06:07:03 compute-0 sudo[32794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:07:04 compute-0 python3.9[32796]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:07:04 compute-0 sudo[32794]: pam_unix(sudo:session): session closed for user root
Nov 29 06:07:04 compute-0 sudo[32946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufedhzusifjwwkdhpvhqxjffqwvqvzxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396424.5279534-278-254120529265191/AnsiballZ_file.py'
Nov 29 06:07:04 compute-0 sudo[32946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:07:05 compute-0 python3.9[32948]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:07:05 compute-0 sudo[32946]: pam_unix(sudo:session): session closed for user root
Nov 29 06:07:05 compute-0 sshd-session[31449]: Connection closed by invalid user test 193.32.162.157 port 59218 [preauth]
Nov 29 06:07:06 compute-0 python3.9[33098]: ansible-ansible.builtin.service_facts Invoked
Nov 29 06:07:10 compute-0 python3.9[33353]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:07:10 compute-0 python3.9[33503]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:07:12 compute-0 python3.9[33657]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:07:13 compute-0 sudo[33813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyuidnsohnjvsqteeznnrdaksbiupysg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396432.7789404-422-68399716274096/AnsiballZ_setup.py'
Nov 29 06:07:13 compute-0 sudo[33813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:07:13 compute-0 python3.9[33815]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:07:13 compute-0 sudo[33813]: pam_unix(sudo:session): session closed for user root
Nov 29 06:07:14 compute-0 sudo[33897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iswvgppjgzlirkqjoqyayyypjxlxviph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396432.7789404-422-68399716274096/AnsiballZ_dnf.py'
Nov 29 06:07:14 compute-0 sudo[33897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:07:14 compute-0 python3.9[33899]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:07:16 compute-0 sshd-session[33099]: Connection closed by authenticating user root 193.32.162.157 port 57856 [preauth]
Nov 29 06:07:28 compute-0 sshd-session[33960]: Connection closed by authenticating user root 193.32.162.157 port 49620 [preauth]
Nov 29 06:07:39 compute-0 sshd-session[33985]: Connection closed by authenticating user root 193.32.162.157 port 34882 [preauth]
Nov 29 06:07:45 compute-0 sshd-session[34050]: Invalid user deploy from 138.124.186.225 port 42472
Nov 29 06:07:45 compute-0 sshd-session[34050]: Received disconnect from 138.124.186.225 port 42472:11: Bye Bye [preauth]
Nov 29 06:07:45 compute-0 sshd-session[34050]: Disconnected from invalid user deploy 138.124.186.225 port 42472 [preauth]
Nov 29 06:07:49 compute-0 sshd-session[34048]: Invalid user otsmanager from 193.32.162.157 port 59660
Nov 29 06:07:51 compute-0 sshd-session[34052]: Invalid user deploy from 31.6.212.12 port 40572
Nov 29 06:07:52 compute-0 sshd-session[34052]: Received disconnect from 31.6.212.12 port 40572:11: Bye Bye [preauth]
Nov 29 06:07:52 compute-0 sshd-session[34052]: Disconnected from invalid user deploy 31.6.212.12 port 40572 [preauth]
Nov 29 06:07:52 compute-0 sshd-session[34048]: Connection closed by invalid user otsmanager 193.32.162.157 port 59660 [preauth]
Nov 29 06:07:57 compute-0 systemd[1]: Reloading.
Nov 29 06:07:57 compute-0 systemd-rc-local-generator[34108]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:07:58 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 29 06:07:58 compute-0 systemd[1]: Reloading.
Nov 29 06:07:58 compute-0 systemd-rc-local-generator[34152]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:07:58 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 29 06:07:58 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 29 06:07:58 compute-0 systemd[1]: Reloading.
Nov 29 06:07:58 compute-0 systemd-rc-local-generator[34193]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:07:59 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 29 06:07:59 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Nov 29 06:07:59 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Nov 29 06:07:59 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Nov 29 06:08:03 compute-0 sshd-session[34054]: Connection closed by authenticating user root 193.32.162.157 port 33036 [preauth]
Nov 29 06:08:05 compute-0 sshd-session[34228]: Invalid user alma from 79.116.35.29 port 57812
Nov 29 06:08:05 compute-0 sshd-session[34228]: Received disconnect from 79.116.35.29 port 57812:11: Bye Bye [preauth]
Nov 29 06:08:05 compute-0 sshd-session[34228]: Disconnected from invalid user alma 79.116.35.29 port 57812 [preauth]
Nov 29 06:08:08 compute-0 sshd-session[34241]: Invalid user zhangsan from 104.208.108.166 port 31548
Nov 29 06:08:08 compute-0 sshd-session[34239]: Invalid user test1 from 103.147.159.91 port 51486
Nov 29 06:08:09 compute-0 sshd-session[34241]: Received disconnect from 104.208.108.166 port 31548:11: Bye Bye [preauth]
Nov 29 06:08:09 compute-0 sshd-session[34241]: Disconnected from invalid user zhangsan 104.208.108.166 port 31548 [preauth]
Nov 29 06:08:09 compute-0 sshd-session[34239]: Received disconnect from 103.147.159.91 port 51486:11: Bye Bye [preauth]
Nov 29 06:08:09 compute-0 sshd-session[34239]: Disconnected from invalid user test1 103.147.159.91 port 51486 [preauth]
Nov 29 06:08:14 compute-0 sshd-session[34221]: Connection closed by authenticating user root 193.32.162.157 port 39954 [preauth]
Nov 29 06:08:26 compute-0 sshd-session[34257]: Connection closed by authenticating user root 193.32.162.157 port 59632 [preauth]
Nov 29 06:08:37 compute-0 sshd-session[34286]: Connection closed by authenticating user root 193.32.162.157 port 54796 [preauth]
Nov 29 06:08:48 compute-0 sshd-session[34329]: Connection closed by authenticating user root 193.32.162.157 port 51662 [preauth]
Nov 29 06:08:49 compute-0 sshd-session[34378]: Invalid user tempuser from 138.124.186.225 port 60192
Nov 29 06:08:49 compute-0 sshd-session[34378]: Received disconnect from 138.124.186.225 port 60192:11: Bye Bye [preauth]
Nov 29 06:08:49 compute-0 sshd-session[34378]: Disconnected from invalid user tempuser 138.124.186.225 port 60192 [preauth]
Nov 29 06:08:59 compute-0 sshd-session[34375]: Connection closed by authenticating user root 193.32.162.157 port 58772 [preauth]
Nov 29 06:08:59 compute-0 systemd[1]: Starting dnf makecache...
Nov 29 06:09:00 compute-0 dnf[34386]: Failed determining last makecache time.
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-openstack-barbican-42b4c41831408a8e323 111 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 175 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-openstack-cinder-1c00d6490d88e436f26ef 186 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-python-stevedore-c4acc5639fd2329372142 175 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-python-cloudkitty-tests-tempest-2c80f8 172 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-os-net-config-9758ab42364673d01bc5014e 149 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 192 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-python-designate-tests-tempest-347fdbc 200 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-openstack-glance-1fd12c29b339f30fe823e 197 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 196 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-openstack-manila-3c01b7181572c95dac462 192 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-python-whitebox-neutron-tests-tempest- 197 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-openstack-octavia-ba397f07a7331190208c 192 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-openstack-watcher-c014f81a8647287f6dcc 171 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-python-tcib-1124124ec06aadbac34f0d340b 189 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 184 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-openstack-swift-dc98a8463506ac520c469a 185 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-python-tempestconf-8515371b7cceebd4282 164 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: delorean-openstack-heat-ui-013accbfd179753bc3f0 199 kB/s | 3.0 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: CentOS Stream 9 - BaseOS                         77 kB/s | 7.3 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: CentOS Stream 9 - AppStream                      33 kB/s | 7.4 kB     00:00
Nov 29 06:09:00 compute-0 dnf[34386]: CentOS Stream 9 - CRB                            70 kB/s | 7.2 kB     00:00
Nov 29 06:09:01 compute-0 dnf[34386]: CentOS Stream 9 - Extras packages                74 kB/s | 8.3 kB     00:00
Nov 29 06:09:01 compute-0 dnf[34386]: dlrn-antelope-testing                           104 kB/s | 3.0 kB     00:00
Nov 29 06:09:01 compute-0 dnf[34386]: dlrn-antelope-build-deps                        120 kB/s | 3.0 kB     00:00
Nov 29 06:09:01 compute-0 dnf[34386]: centos9-rabbitmq                                 88 kB/s | 3.0 kB     00:00
Nov 29 06:09:01 compute-0 dnf[34386]: centos9-storage                                  40 kB/s | 3.0 kB     00:00
Nov 29 06:09:01 compute-0 dnf[34386]: centos9-opstools                                 25 kB/s | 3.0 kB     00:00
Nov 29 06:09:01 compute-0 dnf[34386]: NFV SIG OpenvSwitch                             112 kB/s | 3.0 kB     00:00
Nov 29 06:09:01 compute-0 dnf[34386]: repo-setup-centos-appstream                     150 kB/s | 4.4 kB     00:00
Nov 29 06:09:01 compute-0 dnf[34386]: repo-setup-centos-baseos                        163 kB/s | 3.9 kB     00:00
Nov 29 06:09:01 compute-0 dnf[34386]: repo-setup-centos-highavailability               77 kB/s | 3.9 kB     00:00
Nov 29 06:09:01 compute-0 dnf[34386]: repo-setup-centos-powertools                    104 kB/s | 4.3 kB     00:00
Nov 29 06:09:02 compute-0 dnf[34386]: Extra Packages for Enterprise Linux 9 - x86_64  107 kB/s |  33 kB     00:00
Nov 29 06:09:02 compute-0 sshd-session[34424]: Invalid user support from 78.128.112.74 port 41870
Nov 29 06:09:02 compute-0 sshd-session[34424]: Connection closed by invalid user support 78.128.112.74 port 41870 [preauth]
Nov 29 06:09:02 compute-0 dnf[34386]: Metadata cache created.
Nov 29 06:09:02 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 06:09:02 compute-0 systemd[1]: Finished dnf makecache.
Nov 29 06:09:02 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.814s CPU time.
Nov 29 06:09:10 compute-0 kernel: SELinux:  Converting 2718 SID table entries...
Nov 29 06:09:10 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:09:10 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 06:09:10 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:09:10 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:09:10 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:09:10 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:09:10 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:09:10 compute-0 sshd-session[34472]: Invalid user ubuntu from 79.116.35.29 port 57128
Nov 29 06:09:11 compute-0 sshd-session[34387]: Connection closed by authenticating user root 193.32.162.157 port 54450 [preauth]
Nov 29 06:09:11 compute-0 sshd-session[34472]: Received disconnect from 79.116.35.29 port 57128:11: Bye Bye [preauth]
Nov 29 06:09:11 compute-0 sshd-session[34472]: Disconnected from invalid user ubuntu 79.116.35.29 port 57128 [preauth]
Nov 29 06:09:11 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 29 06:09:11 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 06:09:11 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 06:09:11 compute-0 systemd[1]: Reloading.
Nov 29 06:09:11 compute-0 systemd-rc-local-generator[34583]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:09:11 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 06:09:12 compute-0 sudo[33897]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:12 compute-0 sshd-session[34541]: Received disconnect from 115.190.37.201 port 51080:11: Bye Bye [preauth]
Nov 29 06:09:12 compute-0 sshd-session[34541]: Disconnected from authenticating user root 115.190.37.201 port 51080 [preauth]
Nov 29 06:09:12 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 06:09:12 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 06:09:12 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.349s CPU time.
Nov 29 06:09:12 compute-0 systemd[1]: run-r8c6adfed0c3f46b9b28c6b687f452354.service: Deactivated successfully.
Nov 29 06:09:14 compute-0 sshd-session[35369]: Invalid user stperez from 31.6.212.12 port 49102
Nov 29 06:09:14 compute-0 sshd-session[35369]: Received disconnect from 31.6.212.12 port 49102:11: Bye Bye [preauth]
Nov 29 06:09:14 compute-0 sshd-session[35369]: Disconnected from invalid user stperez 31.6.212.12 port 49102 [preauth]
Nov 29 06:09:18 compute-0 sudo[35496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpiryewxipngobypxlwbxuikavysuonz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396556.144097-458-107463714171977/AnsiballZ_command.py'
Nov 29 06:09:18 compute-0 sudo[35496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:18 compute-0 python3.9[35498]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:09:19 compute-0 sudo[35496]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:20 compute-0 sudo[35777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fakbofzfotyhqhmiwvbylfwqcaapohcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396559.921944-482-91175031742041/AnsiballZ_selinux.py'
Nov 29 06:09:21 compute-0 sudo[35777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:21 compute-0 python3.9[35779]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 06:09:21 compute-0 sudo[35777]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:22 compute-0 sudo[35931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-petdaumjcvplzbewyrfzhbscrfvpzlbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396562.2060995-515-210265228543811/AnsiballZ_command.py'
Nov 29 06:09:22 compute-0 sudo[35931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:22 compute-0 python3.9[35933]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 06:09:23 compute-0 sshd-session[34547]: Connection closed by authenticating user root 193.32.162.157 port 55714 [preauth]
Nov 29 06:09:23 compute-0 sshd-session[35804]: Invalid user test1 from 104.208.108.166 port 27310
Nov 29 06:09:23 compute-0 sshd-session[35804]: Received disconnect from 104.208.108.166 port 27310:11: Bye Bye [preauth]
Nov 29 06:09:23 compute-0 sshd-session[35804]: Disconnected from invalid user test1 104.208.108.166 port 27310 [preauth]
Nov 29 06:09:23 compute-0 sudo[35931]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:25 compute-0 sudo[36086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcsgpgtjmzhpgyjmlcezkfhmggrgcoug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396565.1348147-539-54220061639911/AnsiballZ_file.py'
Nov 29 06:09:25 compute-0 sudo[36086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:26 compute-0 python3.9[36088]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:09:26 compute-0 sudo[36086]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:26 compute-0 sudo[36238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnrsrbhrxpwbhcniwfoqkzeppjvxpdds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396566.314226-563-76010166927250/AnsiballZ_mount.py'
Nov 29 06:09:26 compute-0 sudo[36238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:27 compute-0 python3.9[36240]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 06:09:27 compute-0 sudo[36238]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:29 compute-0 sudo[36390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggeapgsllwzxytasvcitncchoujkvsdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396569.42723-647-56464860535886/AnsiballZ_file.py'
Nov 29 06:09:29 compute-0 sudo[36390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:31 compute-0 sshd-session[36393]: Invalid user test1 from 103.147.159.91 port 51614
Nov 29 06:09:32 compute-0 python3.9[36392]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:09:32 compute-0 sudo[36390]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:32 compute-0 sshd-session[36393]: Received disconnect from 103.147.159.91 port 51614:11: Bye Bye [preauth]
Nov 29 06:09:32 compute-0 sshd-session[36393]: Disconnected from invalid user test1 103.147.159.91 port 51614 [preauth]
Nov 29 06:09:32 compute-0 sudo[36544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwlhfzwxrxkfjtqarompkfingraijggy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396572.401859-671-253087301284542/AnsiballZ_stat.py'
Nov 29 06:09:32 compute-0 sudo[36544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:32 compute-0 python3.9[36546]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:09:33 compute-0 sudo[36544]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:33 compute-0 sudo[36667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfnlzjvblootffctsulyvroabnmewbbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396572.401859-671-253087301284542/AnsiballZ_copy.py'
Nov 29 06:09:33 compute-0 sudo[36667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:34 compute-0 sshd-session[35935]: Connection closed by authenticating user root 193.32.162.157 port 36516 [preauth]
Nov 29 06:09:36 compute-0 python3.9[36669]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396572.401859-671-253087301284542/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3385b01217fece5877d0a0cc7f45f60761b1d6d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:09:36 compute-0 sudo[36667]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:38 compute-0 sudo[36821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-micucxzmrcmiurdbsldlcnoexrhgeoro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396578.2960267-743-158420437693314/AnsiballZ_stat.py'
Nov 29 06:09:38 compute-0 sudo[36821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:38 compute-0 python3.9[36823]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:09:38 compute-0 sudo[36821]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:39 compute-0 sudo[36973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bplkcopcuwzhgkmeifycrpapcrtomcao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396579.1027162-767-176284240398159/AnsiballZ_command.py'
Nov 29 06:09:39 compute-0 sudo[36973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:39 compute-0 python3.9[36975]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:09:39 compute-0 sudo[36973]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:40 compute-0 sudo[37126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pelemlhjiyhffhexaqvfqzzcroizccvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396580.0692196-791-69141429685628/AnsiballZ_file.py'
Nov 29 06:09:40 compute-0 sudo[37126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:40 compute-0 python3.9[37128]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:09:40 compute-0 sudo[37126]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:41 compute-0 sudo[37278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msidwtqznhawflbnfuzvhbeagsyvlwng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396581.2014484-824-84583972980264/AnsiballZ_getent.py'
Nov 29 06:09:41 compute-0 sudo[37278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:41 compute-0 python3.9[37280]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 06:09:41 compute-0 sudo[37278]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:41 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 06:09:42 compute-0 sudo[37432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zafxpomgkwlihbvblrhqepyvyivhramn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396582.1980672-848-54813291575421/AnsiballZ_group.py'
Nov 29 06:09:42 compute-0 sudo[37432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:43 compute-0 python3.9[37434]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 06:09:43 compute-0 groupadd[37435]: group added to /etc/group: name=qemu, GID=107
Nov 29 06:09:43 compute-0 groupadd[37435]: group added to /etc/gshadow: name=qemu
Nov 29 06:09:43 compute-0 groupadd[37435]: new group: name=qemu, GID=107
Nov 29 06:09:43 compute-0 sudo[37432]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:43 compute-0 sshd-session[36670]: Invalid user system from 193.32.162.157 port 41726
Nov 29 06:09:44 compute-0 sudo[37590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfcfldfeuthghbzuxcgdcnuukwjyjkzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396583.4907527-872-234901811453531/AnsiballZ_user.py'
Nov 29 06:09:44 compute-0 sudo[37590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:44 compute-0 python3.9[37592]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 06:09:44 compute-0 useradd[37594]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 06:09:44 compute-0 sudo[37590]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:45 compute-0 sudo[37750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icojfjscznklfocnemoyuacipuiudmjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396584.9438348-896-91875983674151/AnsiballZ_getent.py'
Nov 29 06:09:45 compute-0 sudo[37750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:45 compute-0 sshd-session[36670]: Connection closed by invalid user system 193.32.162.157 port 41726 [preauth]
Nov 29 06:09:45 compute-0 python3.9[37752]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 06:09:45 compute-0 sudo[37750]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:46 compute-0 sudo[37904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaizqcapzjmedyejenubczkgwbwxfgek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396585.8108008-920-83468355130186/AnsiballZ_group.py'
Nov 29 06:09:46 compute-0 sudo[37904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:46 compute-0 python3.9[37906]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 06:09:46 compute-0 groupadd[37907]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 29 06:09:46 compute-0 groupadd[37907]: group added to /etc/gshadow: name=hugetlbfs
Nov 29 06:09:46 compute-0 groupadd[37907]: new group: name=hugetlbfs, GID=42477
Nov 29 06:09:46 compute-0 sudo[37904]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:47 compute-0 sudo[38062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaukaxefgqwnhggtlajygnwtmwjsrsvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396586.7564116-947-66229076915681/AnsiballZ_file.py'
Nov 29 06:09:47 compute-0 sudo[38062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:47 compute-0 python3.9[38064]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 06:09:47 compute-0 sudo[38062]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:48 compute-0 sudo[38215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmeboizywpjnkhxoainbzdirqzvomlva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396587.8276722-980-116111023359913/AnsiballZ_dnf.py'
Nov 29 06:09:48 compute-0 sudo[38215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:48 compute-0 python3.9[38217]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:09:50 compute-0 sudo[38215]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:51 compute-0 sudo[38368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnuvhfgsjcvseprhkpltgzhdruvvyhsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396590.6208293-1004-176034005096926/AnsiballZ_file.py'
Nov 29 06:09:51 compute-0 sudo[38368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:51 compute-0 python3.9[38370]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:09:51 compute-0 sudo[38368]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:51 compute-0 sudo[38520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfadaftcxqbfpmpzisjthineltfdvbii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396591.4921277-1028-266701471914340/AnsiballZ_stat.py'
Nov 29 06:09:51 compute-0 sudo[38520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:52 compute-0 python3.9[38522]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:09:52 compute-0 sudo[38520]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:52 compute-0 sudo[38643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwbunaxmdckmsogdexnhclzwvnsvkhuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396591.4921277-1028-266701471914340/AnsiballZ_copy.py'
Nov 29 06:09:52 compute-0 sudo[38643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:52 compute-0 python3.9[38645]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764396591.4921277-1028-266701471914340/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
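The `copy` invocation above logs `checksum=8021efe0…`: `ansible.builtin.copy` compares the SHA-1 of the rendered source against the destination file and only rewrites on mismatch. A sketch of that digest (the file content below is hypothetical; the real template content is not logged):

```python
import hashlib

def sha1_of(data: bytes) -> str:
    """Hex SHA-1 digest, the value ansible.builtin.copy reports as 'checksum='."""
    return hashlib.sha1(data).hexdigest()

# Hypothetical modules-load content; the actual 99-edpm.conf body is NOT_LOGGING_PARAMETER.
digest = sha1_of(b"br_netfilter\n")
print(digest, len(digest))  # 40-hex-char SHA-1, like 8021efe01721d8fa8cab46b95c00ec1be6dbb9d0
```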
Nov 29 06:09:52 compute-0 sudo[38643]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:53 compute-0 sudo[38797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sonvgdeyhygovqpgqlllimflwumphhvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396593.0400133-1073-93586475432145/AnsiballZ_systemd.py'
Nov 29 06:09:53 compute-0 sudo[38797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:53 compute-0 sshd-session[38722]: Received disconnect from 138.124.186.225 port 37058:11: Bye Bye [preauth]
Nov 29 06:09:53 compute-0 sshd-session[38722]: Disconnected from authenticating user root 138.124.186.225 port 37058 [preauth]
Nov 29 06:09:54 compute-0 python3.9[38799]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:09:54 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 29 06:09:54 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 29 06:09:54 compute-0 kernel: Bridge firewalling registered
Nov 29 06:09:54 compute-0 systemd-modules-load[38803]: Inserted module 'br_netfilter'
Nov 29 06:09:54 compute-0 systemd[1]: Finished Load Kernel Modules.
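The restart of `systemd-modules-load.service` above picks up the freshly written `/etc/modules-load.d/99-edpm.conf` and inserts `br_netfilter`. The `modules-load.d` format is one module name per line, with blank lines and `#`/`;` comments ignored; a sketch of that parsing, using hypothetical file content:

```python
def parse_modules_load(text: str) -> list[str]:
    """Parse modules-load.d(5) content: one module per line,
    blank lines and lines starting with '#' or ';' are skipped."""
    mods = []
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line[0] in "#;":
            continue
        mods.append(line)
    return mods

# Hypothetical 99-edpm.conf body (the real content is not logged).
sample = "# edpm-managed\nbr_netfilter\n\n; commented-out example\n"
print(parse_modules_load(sample))  # ['br_netfilter']
```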
Nov 29 06:09:54 compute-0 sudo[38797]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:54 compute-0 sudo[38957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iczkvvbpyxvpailqgbzyjjvdstgaegmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396594.6039958-1097-228041779108748/AnsiballZ_stat.py'
Nov 29 06:09:54 compute-0 sudo[38957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:55 compute-0 python3.9[38959]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:09:55 compute-0 sudo[38957]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:55 compute-0 sudo[39080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlubyfdizriszrmuiqdvbboshbpnqcps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396594.6039958-1097-228041779108748/AnsiballZ_copy.py'
Nov 29 06:09:55 compute-0 sudo[39080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:55 compute-0 python3.9[39082]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764396594.6039958-1097-228041779108748/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:09:55 compute-0 sudo[39080]: pam_unix(sudo:session): session closed for user root
Nov 29 06:09:56 compute-0 sudo[39232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cexzwbzdxnvtmwdocewauwoockjyebyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396596.359957-1151-10262337354778/AnsiballZ_dnf.py'
Nov 29 06:09:56 compute-0 sudo[39232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:09:56 compute-0 sshd-session[37754]: Connection closed by authenticating user root 193.32.162.157 port 60950 [preauth]
Nov 29 06:09:56 compute-0 python3.9[39234]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:10:00 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Nov 29 06:10:00 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Nov 29 06:10:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 06:10:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 06:10:01 compute-0 systemd[1]: Reloading.
Nov 29 06:10:01 compute-0 systemd-rc-local-generator[39295]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:10:01 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 06:10:02 compute-0 sudo[39232]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:04 compute-0 python3.9[41331]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:10:05 compute-0 python3.9[42282]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 06:10:06 compute-0 python3.9[43103]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:10:06 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 06:10:06 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 06:10:06 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.836s CPU time.
Nov 29 06:10:06 compute-0 systemd[1]: run-rcc745b10e61e4ca18fd82697e7a2feff.service: Deactivated successfully.
Nov 29 06:10:07 compute-0 sudo[43463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoyacbpwobulkfgyroyxnfrxujhzinfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396606.8454719-1268-148206587011350/AnsiballZ_command.py'
Nov 29 06:10:07 compute-0 sudo[43463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:07 compute-0 python3.9[43465]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:10:07 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 06:10:08 compute-0 sshd-session[39236]: Connection closed by authenticating user root 193.32.162.157 port 39492 [preauth]
Nov 29 06:10:08 compute-0 systemd[1]: Starting Authorization Manager...
Nov 29 06:10:08 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 06:10:08 compute-0 polkitd[43682]: Started polkitd version 0.117
Nov 29 06:10:08 compute-0 polkitd[43682]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 06:10:08 compute-0 polkitd[43682]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 06:10:08 compute-0 polkitd[43682]: Finished loading, compiling and executing 2 rules
Nov 29 06:10:08 compute-0 polkitd[43682]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 29 06:10:08 compute-0 systemd[1]: Started Authorization Manager.
Nov 29 06:10:08 compute-0 sudo[43463]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:09 compute-0 sudo[43851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvfrwbyoccqpqvbsvegfsoknciitnbjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396608.5795445-1295-164477583768259/AnsiballZ_systemd.py'
Nov 29 06:10:09 compute-0 sudo[43851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:09 compute-0 python3.9[43853]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:10:09 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 06:10:09 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 06:10:09 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 06:10:09 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 06:10:09 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 06:10:09 compute-0 sudo[43851]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:10 compute-0 python3.9[44016]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 06:10:14 compute-0 sudo[44166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sukwyieueygeeiwguiookewqnolrcspe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396613.9998276-1466-216353616109266/AnsiballZ_systemd.py'
Nov 29 06:10:14 compute-0 sudo[44166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:14 compute-0 python3.9[44168]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:10:14 compute-0 systemd[1]: Reloading.
Nov 29 06:10:14 compute-0 systemd-rc-local-generator[44193]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:10:15 compute-0 sudo[44166]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:15 compute-0 sudo[44355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsiiikcttwqcphczibxvtqsmlrvqvuqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396615.214969-1466-277643175078614/AnsiballZ_systemd.py'
Nov 29 06:10:15 compute-0 sudo[44355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:15 compute-0 python3.9[44357]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:10:15 compute-0 systemd[1]: Reloading.
Nov 29 06:10:16 compute-0 systemd-rc-local-generator[44388]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:10:16 compute-0 sudo[44355]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:16 compute-0 sudo[44544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqjfdwclnvjkcfxzdhjwwvaryulilics ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396616.5202706-1514-278800873678241/AnsiballZ_command.py'
Nov 29 06:10:16 compute-0 sudo[44544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:17 compute-0 sshd-session[43699]: Invalid user dev from 193.32.162.157 port 52692
Nov 29 06:10:17 compute-0 python3.9[44546]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:10:17 compute-0 sudo[44544]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:17 compute-0 sudo[44697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-temeirnhnjshmxyubmnopbfowphmpgdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396617.3670964-1538-116989959618469/AnsiballZ_command.py'
Nov 29 06:10:17 compute-0 sudo[44697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:17 compute-0 python3.9[44699]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:10:17 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 29 06:10:18 compute-0 sudo[44697]: pam_unix(sudo:session): session closed for user root
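The kernel reports `Adding 1048572k swap on /swap` for what was created as a 1 GiB swap file: `mkswap` reserves the first page (4 KiB on x86-64) for its signature header, so usable swap is one page short of the file size. The arithmetic:

```python
GIB_KIB = 1 * 1024 * 1024   # 1 GiB expressed in KiB
PAGE_KIB = 4                # x86-64 default page size, in KiB
usable = GIB_KIB - PAGE_KIB # mkswap keeps page 0 for its header
print(usable)               # 1048572, matching the kernel's "Adding 1048572k swap"
```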
Nov 29 06:10:18 compute-0 sudo[44850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvyhkrhaksrgnjqmkwbbewfrtnnrzwwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396618.2251139-1562-207769747738021/AnsiballZ_command.py'
Nov 29 06:10:18 compute-0 sudo[44850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:18 compute-0 python3.9[44852]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:10:19 compute-0 sshd-session[44853]: Invalid user stperez from 79.116.35.29 port 56440
Nov 29 06:10:19 compute-0 sshd-session[44853]: Received disconnect from 79.116.35.29 port 56440:11: Bye Bye [preauth]
Nov 29 06:10:19 compute-0 sshd-session[44853]: Disconnected from invalid user stperez 79.116.35.29 port 56440 [preauth]
Nov 29 06:10:19 compute-0 sshd-session[43699]: Connection closed by invalid user dev 193.32.162.157 port 52692 [preauth]
Nov 29 06:10:20 compute-0 sudo[44850]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:20 compute-0 sudo[45017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjhmacaycqiochzqokrmxribieeturib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396620.5439754-1586-97455490236495/AnsiballZ_command.py'
Nov 29 06:10:20 compute-0 sudo[45017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:21 compute-0 sshd-session[44864]: Invalid user bodega from 115.190.37.201 port 45684
Nov 29 06:10:21 compute-0 python3.9[45019]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:10:21 compute-0 sudo[45017]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:21 compute-0 sshd-session[44864]: Received disconnect from 115.190.37.201 port 45684:11: Bye Bye [preauth]
Nov 29 06:10:21 compute-0 sshd-session[44864]: Disconnected from invalid user bodega 115.190.37.201 port 45684 [preauth]
Nov 29 06:10:21 compute-0 sudo[45171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krtybbbbbsbhohngxdndvxpznwhuoaqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396621.4859889-1610-262159146238539/AnsiballZ_systemd.py'
Nov 29 06:10:21 compute-0 sudo[45171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:22 compute-0 python3.9[45173]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:10:22 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 06:10:22 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 06:10:22 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 29 06:10:22 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 29 06:10:22 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 06:10:22 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 29 06:10:22 compute-0 sudo[45171]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:22 compute-0 sshd-session[31425]: Connection closed by 192.168.122.30 port 37420
Nov 29 06:10:22 compute-0 sshd-session[31422]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:10:22 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 29 06:10:22 compute-0 systemd[1]: session-9.scope: Consumed 2min 18.731s CPU time.
Nov 29 06:10:22 compute-0 systemd-logind[797]: Session 9 logged out. Waiting for processes to exit.
Nov 29 06:10:22 compute-0 systemd-logind[797]: Removed session 9.
Nov 29 06:10:28 compute-0 sshd-session[45203]: Accepted publickey for zuul from 192.168.122.30 port 57066 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:10:28 compute-0 systemd-logind[797]: New session 10 of user zuul.
Nov 29 06:10:28 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 29 06:10:28 compute-0 sshd-session[45203]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:10:30 compute-0 python3.9[45356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:10:30 compute-0 sshd-session[44861]: Connection closed by authenticating user root 193.32.162.157 port 33760 [preauth]
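Interleaved with the deployment, `sshd-session` records a steady stream of failed logins (`Invalid user …`, `Connection closed by authenticating user root …`) from a handful of internet hosts — ordinary brute-force noise on an exposed port 22. A sketch that tallies attacker IPs from such lines (samples verbatim from this log):

```python
import re
from collections import Counter

lines = [
    "Nov 29 06:10:17 compute-0 sshd-session[43699]: Invalid user dev from 193.32.162.157 port 52692",
    "Nov 29 06:10:19 compute-0 sshd-session[44853]: Invalid user stperez from 79.116.35.29 port 56440",
    "Nov 29 06:10:21 compute-0 sshd-session[44864]: Invalid user bodega from 115.190.37.201 port 45684",
    "Nov 29 06:10:30 compute-0 sshd-session[44861]: Connection closed by authenticating user root 193.32.162.157 port 33760 [preauth]",
]

# Grab the IPv4 address immediately preceding "port <n>".
ip_re = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3}) port \d+")
hits = Counter(m.group(1) for line in lines
               for m in [ip_re.search(line)] if m)
print(hits.most_common(1))  # [('193.32.162.157', 2)]
```

Over the full journal, 193.32.162.157 is the most persistent source; this is the kind of pattern fail2ban-style tooling keys on.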
Nov 29 06:10:31 compute-0 sudo[45511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubrspvjshnllucxzivtahuzhjknxwdam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396631.249451-73-267710712724495/AnsiballZ_getent.py'
Nov 29 06:10:31 compute-0 sudo[45511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:31 compute-0 python3.9[45513]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 06:10:31 compute-0 sudo[45511]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:32 compute-0 sudo[45664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynqggztcclbtziyofwqnwartpoaeozqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396632.1883597-97-58484931132943/AnsiballZ_group.py'
Nov 29 06:10:32 compute-0 sudo[45664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:32 compute-0 python3.9[45666]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 06:10:33 compute-0 groupadd[45667]: group added to /etc/group: name=openvswitch, GID=42476
Nov 29 06:10:33 compute-0 groupadd[45667]: group added to /etc/gshadow: name=openvswitch
Nov 29 06:10:33 compute-0 groupadd[45667]: new group: name=openvswitch, GID=42476
Nov 29 06:10:33 compute-0 sudo[45664]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:34 compute-0 sudo[45823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srpkvzwgzlrwltnsvuvvaunjjfaceucm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396633.5813587-121-83566937771127/AnsiballZ_user.py'
Nov 29 06:10:34 compute-0 sudo[45823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:34 compute-0 python3.9[45825]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 06:10:34 compute-0 useradd[45827]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 06:10:34 compute-0 useradd[45827]: add 'openvswitch' to group 'hugetlbfs'
Nov 29 06:10:34 compute-0 useradd[45827]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 29 06:10:34 compute-0 sudo[45823]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:35 compute-0 sudo[45983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdefemqgyvjmahssicrvylrgtpdmjznl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396635.0830793-151-37611822071383/AnsiballZ_setup.py'
Nov 29 06:10:35 compute-0 sudo[45983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:35 compute-0 python3.9[45985]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:10:36 compute-0 sudo[45983]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:36 compute-0 sudo[46069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abdoshcuqzvwmaehcyzznkchrtjakgms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396635.0830793-151-37611822071383/AnsiballZ_dnf.py'
Nov 29 06:10:36 compute-0 sudo[46069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:36 compute-0 python3.9[46071]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 06:10:37 compute-0 sshd-session[45994]: Invalid user test1 from 104.208.108.166 port 43786
Nov 29 06:10:37 compute-0 sshd-session[45994]: Received disconnect from 104.208.108.166 port 43786:11: Bye Bye [preauth]
Nov 29 06:10:37 compute-0 sshd-session[45994]: Disconnected from invalid user test1 104.208.108.166 port 43786 [preauth]
Nov 29 06:10:38 compute-0 sudo[46069]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:39 compute-0 sshd-session[46090]: Invalid user marvin from 31.6.212.12 port 47172
Nov 29 06:10:39 compute-0 sshd-session[46090]: Received disconnect from 31.6.212.12 port 47172:11: Bye Bye [preauth]
Nov 29 06:10:39 compute-0 sshd-session[46090]: Disconnected from invalid user marvin 31.6.212.12 port 47172 [preauth]
Nov 29 06:10:40 compute-0 sudo[46235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbuzebqbnyxvxnlaubqlwcozzpfzdaua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396640.328079-193-249588535667282/AnsiballZ_dnf.py'
Nov 29 06:10:40 compute-0 sudo[46235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:40 compute-0 python3.9[46237]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:10:42 compute-0 sshd-session[45385]: Connection closed by authenticating user root 193.32.162.157 port 38066 [preauth]
Nov 29 06:10:48 compute-0 sshd-session[46241]: Invalid user daniel from 193.32.162.157 port 56312
Nov 29 06:10:51 compute-0 sshd-session[46241]: Connection closed by invalid user daniel 193.32.162.157 port 56312 [preauth]
Nov 29 06:10:52 compute-0 kernel: SELinux:  Converting 2730 SID table entries...
Nov 29 06:10:52 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:10:52 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 06:10:52 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:10:52 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:10:52 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:10:52 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:10:52 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:10:52 compute-0 groupadd[46262]: group added to /etc/group: name=unbound, GID=993
Nov 29 06:10:52 compute-0 groupadd[46262]: group added to /etc/gshadow: name=unbound
Nov 29 06:10:52 compute-0 groupadd[46262]: new group: name=unbound, GID=993
Nov 29 06:10:52 compute-0 useradd[46269]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 29 06:10:52 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 29 06:10:52 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 29 06:10:54 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 06:10:54 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 06:10:54 compute-0 systemd[1]: Reloading.
Nov 29 06:10:54 compute-0 systemd-rc-local-generator[46770]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:10:54 compute-0 systemd-sysv-generator[46774]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:10:54 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 06:10:55 compute-0 sudo[46235]: pam_unix(sudo:session): session closed for user root
Nov 29 06:10:55 compute-0 sshd-session[46848]: Invalid user deploy from 138.124.186.225 port 34782
Nov 29 06:10:55 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 06:10:55 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 06:10:55 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.041s CPU time.
Nov 29 06:10:55 compute-0 systemd[1]: run-r78dac5bb0fa74677999dca655113ca93.service: Deactivated successfully.
Nov 29 06:10:55 compute-0 sshd-session[46848]: Received disconnect from 138.124.186.225 port 34782:11: Bye Bye [preauth]
Nov 29 06:10:55 compute-0 sshd-session[46848]: Disconnected from invalid user deploy 138.124.186.225 port 34782 [preauth]
Nov 29 06:10:56 compute-0 sshd-session[46806]: Received disconnect from 103.147.159.91 port 51732:11: Bye Bye [preauth]
Nov 29 06:10:56 compute-0 sshd-session[46806]: Disconnected from authenticating user root 103.147.159.91 port 51732 [preauth]
Nov 29 06:10:58 compute-0 sudo[47340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiqzisrwobfawvehkqqujumfcbryvkih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396658.2770941-217-171475329008327/AnsiballZ_systemd.py'
Nov 29 06:10:58 compute-0 sudo[47340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:10:59 compute-0 python3.9[47342]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 06:10:59 compute-0 systemd[1]: Reloading.
Nov 29 06:10:59 compute-0 systemd-sysv-generator[47376]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:10:59 compute-0 systemd-rc-local-generator[47370]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:10:59 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 29 06:10:59 compute-0 chown[47384]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 29 06:10:59 compute-0 ovs-ctl[47389]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 29 06:10:59 compute-0 ovs-ctl[47389]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 29 06:10:59 compute-0 ovs-ctl[47389]: Starting ovsdb-server [  OK  ]
Nov 29 06:10:59 compute-0 ovs-vsctl[47438]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 29 06:10:59 compute-0 ovs-vsctl[47454]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"93db784b-4e42-404a-b548-49ad165fd917\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 29 06:11:00 compute-0 ovs-ctl[47389]: Configuring Open vSwitch system IDs [  OK  ]
Nov 29 06:11:00 compute-0 ovs-vsctl[47463]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 06:11:00 compute-0 ovs-ctl[47389]: Enabling remote OVSDB managers [  OK  ]
Nov 29 06:11:00 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 29 06:11:00 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 29 06:11:00 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 29 06:11:00 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 29 06:11:00 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 29 06:11:00 compute-0 ovs-ctl[47507]: Inserting openvswitch module [  OK  ]
Nov 29 06:11:00 compute-0 ovs-ctl[47476]: Starting ovs-vswitchd [  OK  ]
Nov 29 06:11:00 compute-0 ovs-ctl[47476]: Enabling remote OVSDB managers [  OK  ]
Nov 29 06:11:00 compute-0 ovs-vsctl[47525]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 06:11:00 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 29 06:11:00 compute-0 systemd[1]: Starting Open vSwitch...
Nov 29 06:11:00 compute-0 systemd[1]: Finished Open vSwitch.
Nov 29 06:11:00 compute-0 sudo[47340]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:01 compute-0 python3.9[47676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:11:02 compute-0 sudo[47826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwyitxpuajurbppwgfmeiptcudrjdjiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396661.8175142-271-234087539773372/AnsiballZ_sefcontext.py'
Nov 29 06:11:02 compute-0 sudo[47826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:02 compute-0 python3.9[47828]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 06:11:03 compute-0 kernel: SELinux:  Converting 2744 SID table entries...
Nov 29 06:11:03 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:11:03 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 06:11:03 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:11:03 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:11:03 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:11:03 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:11:03 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:11:03 compute-0 sshd-session[46284]: Connection closed by authenticating user root 193.32.162.157 port 59910 [preauth]
Nov 29 06:11:03 compute-0 sudo[47826]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:05 compute-0 python3.9[47985]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:11:05 compute-0 sudo[48141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxwnigddqrkqzqkpmfmsrqwtobipfpbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396665.5764925-325-63182087384027/AnsiballZ_dnf.py'
Nov 29 06:11:05 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 29 06:11:05 compute-0 sudo[48141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:06 compute-0 python3.9[48143]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:11:07 compute-0 sudo[48141]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:08 compute-0 sudo[48295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlbwcijjbwklonhiuexgrhxewxutjtft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396667.836319-349-142757944713367/AnsiballZ_command.py'
Nov 29 06:11:08 compute-0 sudo[48295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:08 compute-0 python3.9[48297]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:11:09 compute-0 sudo[48295]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:10 compute-0 sudo[48582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkilyltngueykausboeomhhrmbzpsgtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396669.5801687-373-97125722305673/AnsiballZ_file.py'
Nov 29 06:11:10 compute-0 sudo[48582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:10 compute-0 python3.9[48584]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 06:11:10 compute-0 sudo[48582]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:11 compute-0 python3.9[48734]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:11:11 compute-0 sudo[48886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftysxyzcghbadkpsipvknzhbkmukemcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396671.5559936-421-264329635400788/AnsiballZ_dnf.py'
Nov 29 06:11:11 compute-0 sudo[48886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:12 compute-0 python3.9[48888]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:11:13 compute-0 sshd-session[47836]: Invalid user guest from 193.32.162.157 port 47220
Nov 29 06:11:13 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 06:11:14 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 06:11:14 compute-0 systemd[1]: Reloading.
Nov 29 06:11:14 compute-0 systemd-rc-local-generator[48918]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:11:14 compute-0 systemd-sysv-generator[48924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:11:14 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 06:11:14 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 06:11:14 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 06:11:14 compute-0 systemd[1]: run-r3c5409f3773c45b9a943bc3a655a1d38.service: Deactivated successfully.
Nov 29 06:11:14 compute-0 sudo[48886]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:15 compute-0 sudo[49203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phcmsvauahdwcrebdgzkkornkdnkljqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396675.0732548-445-37506596078766/AnsiballZ_systemd.py'
Nov 29 06:11:15 compute-0 sudo[49203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:15 compute-0 sshd-session[47836]: Connection closed by invalid user guest 193.32.162.157 port 47220 [preauth]
Nov 29 06:11:15 compute-0 python3.9[49205]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:11:15 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 06:11:15 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 06:11:15 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 06:11:15 compute-0 systemd[1]: Stopping Network Manager...
Nov 29 06:11:15 compute-0 NetworkManager[7189]: <info>  [1764396675.8036] caught SIGTERM, shutting down normally.
Nov 29 06:11:15 compute-0 NetworkManager[7189]: <info>  [1764396675.8061] dhcp4 (eth0): canceled DHCP transaction
Nov 29 06:11:15 compute-0 NetworkManager[7189]: <info>  [1764396675.8062] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:11:15 compute-0 NetworkManager[7189]: <info>  [1764396675.8062] dhcp4 (eth0): state changed no lease
Nov 29 06:11:15 compute-0 NetworkManager[7189]: <info>  [1764396675.8068] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 06:11:15 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 06:11:15 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 06:11:16 compute-0 NetworkManager[7189]: <info>  [1764396676.0096] exiting (success)
Nov 29 06:11:16 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 06:11:16 compute-0 systemd[1]: Stopped Network Manager.
Nov 29 06:11:16 compute-0 systemd[1]: NetworkManager.service: Consumed 13.394s CPU time, 4.1M memory peak, read 0B from disk, written 32.0K to disk.
Nov 29 06:11:16 compute-0 systemd[1]: Starting Network Manager...
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.0948] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:b7b17a39-22f5-4f4f-9861-b1bcbadcfe77)
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.0949] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.1019] manager[0x55b4cec47090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 06:11:16 compute-0 systemd[1]: Starting Hostname Service...
Nov 29 06:11:16 compute-0 systemd[1]: Started Hostname Service.
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2211] hostname: hostname: using hostnamed
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2212] hostname: static hostname changed from (none) to "compute-0"
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2219] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2229] manager[0x55b4cec47090]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2229] manager[0x55b4cec47090]: rfkill: WWAN hardware radio set enabled
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2262] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2277] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2278] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2279] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2280] manager: Networking is enabled by state file
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2283] settings: Loaded settings plugin: keyfile (internal)
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2289] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2330] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2342] dhcp: init: Using DHCP client 'internal'
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2347] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2354] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2362] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2374] device (lo): Activation: starting connection 'lo' (1e70ab37-1fe6-47fd-afad-f3ac90d7657d)
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2383] device (eth0): carrier: link connected
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2391] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2397] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2398] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2407] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2416] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2424] device (eth1): carrier: link connected
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2431] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2437] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (b3ca7565-e6c0-5ba2-a076-c2cd58810e8e) (indicated)
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2438] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2446] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2457] device (eth1): Activation: starting connection 'ci-private-network' (b3ca7565-e6c0-5ba2-a076-c2cd58810e8e)
Nov 29 06:11:16 compute-0 systemd[1]: Started Network Manager.
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2466] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2482] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2485] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2488] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2491] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2495] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2498] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2502] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2508] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2518] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2523] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2535] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2554] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2833] dhcp4 (eth0): state changed new lease, address=38.102.83.22
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.2843] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 06:11:16 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 29 06:11:16 compute-0 sudo[49203]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.4429] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.4444] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.4447] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.4450] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.4460] device (lo): Activation: successful, device activated.
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.4472] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.4477] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.4484] device (eth1): Activation: successful, device activated.
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.4538] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.4541] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.4547] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.4554] device (eth0): Activation: successful, device activated.
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.4564] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 06:11:16 compute-0 NetworkManager[49224]: <info>  [1764396676.4568] manager: startup complete
Nov 29 06:11:16 compute-0 systemd[1]: Finished Network Manager Wait Online.
Nov 29 06:11:17 compute-0 sudo[49430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moimxdzjtqbdfwtgzddgqcrgovlwtaqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396677.2169907-469-53826323662921/AnsiballZ_dnf.py'
Nov 29 06:11:17 compute-0 sudo[49430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:17 compute-0 python3.9[49432]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:11:22 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 06:11:22 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 06:11:22 compute-0 systemd[1]: Reloading.
Nov 29 06:11:22 compute-0 systemd-rc-local-generator[49482]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:11:22 compute-0 systemd-sysv-generator[49486]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:11:22 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 06:11:23 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 06:11:23 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 06:11:23 compute-0 systemd[1]: run-r52c03e1367404518a3055f445798d2c3.service: Deactivated successfully.
Nov 29 06:11:23 compute-0 sudo[49430]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:26 compute-0 sshd-session[49766]: Received disconnect from 79.116.35.29 port 55756:11: Bye Bye [preauth]
Nov 29 06:11:26 compute-0 sshd-session[49766]: Disconnected from authenticating user root 79.116.35.29 port 55756 [preauth]
Nov 29 06:11:26 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 06:11:27 compute-0 sshd-session[49206]: Connection closed by authenticating user root 193.32.162.157 port 55382 [preauth]
Nov 29 06:11:29 compute-0 sudo[49896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smsayhortwotfstwjpnwejfutllxauwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396689.5909283-505-245440366686417/AnsiballZ_stat.py'
Nov 29 06:11:29 compute-0 sudo[49896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:30 compute-0 python3.9[49898]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:11:30 compute-0 sudo[49896]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:30 compute-0 sudo[50048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crvsrljrkojmmzaifptjerfoqesztzak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396690.4840612-532-40778456945265/AnsiballZ_ini_file.py'
Nov 29 06:11:30 compute-0 sudo[50048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:31 compute-0 python3.9[50050]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:11:31 compute-0 sudo[50048]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:31 compute-0 sudo[50202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czhdiwcwljqqkzhxwzauhpuoduxsrjoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396691.513583-562-203697971976315/AnsiballZ_ini_file.py'
Nov 29 06:11:31 compute-0 sudo[50202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:32 compute-0 python3.9[50204]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:11:32 compute-0 sudo[50202]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:32 compute-0 sudo[50354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsnxjsvfmivkvfgswdmmbdzxnkuhfcmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396692.2508636-562-201152794163105/AnsiballZ_ini_file.py'
Nov 29 06:11:32 compute-0 sudo[50354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:32 compute-0 python3.9[50356]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:11:32 compute-0 sudo[50354]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:33 compute-0 sudo[50506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yovkouiflnmlfxkakifzpangvcnlnucc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396693.2800162-607-57654687859512/AnsiballZ_ini_file.py'
Nov 29 06:11:33 compute-0 sudo[50506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:33 compute-0 python3.9[50508]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:11:33 compute-0 sudo[50506]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:34 compute-0 sudo[50658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqdouocwfbplrgkrpshobbpipkstkwcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396693.988057-607-263830276882523/AnsiballZ_ini_file.py'
Nov 29 06:11:34 compute-0 sudo[50658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:34 compute-0 python3.9[50660]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:11:34 compute-0 sudo[50658]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:35 compute-0 sudo[50810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdyhccuomujnadawmctrvlqgxlhxytpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396694.7503397-652-106497898391310/AnsiballZ_stat.py'
Nov 29 06:11:35 compute-0 sudo[50810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:35 compute-0 python3.9[50812]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:11:35 compute-0 sudo[50810]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:35 compute-0 sudo[50933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpoedlqyxpdwggrjdtqnenzzdxcvxpbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396694.7503397-652-106497898391310/AnsiballZ_copy.py'
Nov 29 06:11:35 compute-0 sudo[50933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:35 compute-0 sshd-session[49769]: Invalid user main from 193.32.162.157 port 57052
Nov 29 06:11:35 compute-0 python3.9[50935]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396694.7503397-652-106497898391310/.source _original_basename=.h8c69tcj follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:11:35 compute-0 sudo[50933]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:36 compute-0 sudo[51085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsebhncieuikjunnimmjfrfsdanmjnim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396696.2465918-697-163528946289872/AnsiballZ_file.py'
Nov 29 06:11:36 compute-0 sudo[51085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:36 compute-0 python3.9[51087]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:11:36 compute-0 sudo[51085]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:37 compute-0 sudo[51237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivweonfortyjcvxpnyywiwfbefvkowvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396697.2208118-721-161941227216541/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 29 06:11:37 compute-0 sudo[51237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:37 compute-0 python3.9[51239]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 29 06:11:37 compute-0 sudo[51237]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:38 compute-0 sshd-session[49769]: Connection closed by invalid user main 193.32.162.157 port 57052 [preauth]
Nov 29 06:11:38 compute-0 sudo[51389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjomfrdlmxubtcddunaqskxdfvwqgfrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396698.2389853-748-253123126759359/AnsiballZ_file.py'
Nov 29 06:11:38 compute-0 sudo[51389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:38 compute-0 python3.9[51391]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:11:38 compute-0 sudo[51389]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:39 compute-0 sudo[51541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnainnyyrfgsctpvxxlvxpfaiaxwjkmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396699.1717813-778-228515105138828/AnsiballZ_stat.py'
Nov 29 06:11:39 compute-0 sudo[51541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:39 compute-0 sudo[51541]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:40 compute-0 sudo[51665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnhnqzzzlimrguhowbzghfxbmfwluzaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396699.1717813-778-228515105138828/AnsiballZ_copy.py'
Nov 29 06:11:40 compute-0 sudo[51665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:40 compute-0 sudo[51665]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:41 compute-0 sudo[51818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzdyrmiwsdcqedmmluuuaycckpvoqiec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396700.615979-823-14624059892488/AnsiballZ_slurp.py'
Nov 29 06:11:41 compute-0 sudo[51818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:41 compute-0 python3.9[51820]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 29 06:11:41 compute-0 sudo[51818]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:42 compute-0 sudo[51993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzhyxqelbxejfsivibjlppdmqwvdwomo ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396701.7859013-850-70372614563925/async_wrapper.py j211298114970 300 /home/zuul/.ansible/tmp/ansible-tmp-1764396701.7859013-850-70372614563925/AnsiballZ_edpm_os_net_config.py _'
Nov 29 06:11:42 compute-0 sudo[51993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:42 compute-0 ansible-async_wrapper.py[51995]: Invoked with j211298114970 300 /home/zuul/.ansible/tmp/ansible-tmp-1764396701.7859013-850-70372614563925/AnsiballZ_edpm_os_net_config.py _
Nov 29 06:11:42 compute-0 ansible-async_wrapper.py[51998]: Starting module and watcher
Nov 29 06:11:42 compute-0 ansible-async_wrapper.py[51998]: Start watching 51999 (300)
Nov 29 06:11:42 compute-0 ansible-async_wrapper.py[51999]: Start module (51999)
Nov 29 06:11:42 compute-0 ansible-async_wrapper.py[51995]: Return async_wrapper task started.
Nov 29 06:11:42 compute-0 sudo[51993]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:42 compute-0 python3.9[52000]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 29 06:11:43 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 29 06:11:43 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 29 06:11:43 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 29 06:11:43 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 29 06:11:43 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.0838] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.0861] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1507] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1509] audit: op="connection-add" uuid="eeac5863-66ee-4b3f-bf7f-c02d23c041db" name="br-ex-br" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1526] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1527] audit: op="connection-add" uuid="27fc12c3-9aac-4dc3-8080-14921a438ebd" name="br-ex-port" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1544] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1546] audit: op="connection-add" uuid="fe1178fc-2e29-4419-8399-354dc28e3b2c" name="eth1-port" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1561] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1562] audit: op="connection-add" uuid="6012ad6e-71c5-48a9-9c01-3870d9361158" name="vlan20-port" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1576] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1577] audit: op="connection-add" uuid="808ffa2a-001b-45da-ae40-c67f83a923a5" name="vlan21-port" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1592] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1593] audit: op="connection-add" uuid="952dc223-7473-4a72-a39e-de9d203b944f" name="vlan22-port" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1605] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1607] audit: op="connection-add" uuid="58b0e596-4e75-4486-bab7-ad59ffc2a5e8" name="vlan23-port" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1627] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode,connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1646] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.1648] audit: op="connection-add" uuid="10b840ee-e2b5-4908-8c0e-b2ae3a1e1dbf" name="br-ex-if" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4187] audit: op="connection-update" uuid="b3ca7565-e6c0-5ba2-a076-c2cd58810e8e" name="ci-private-network" args="ovs-external-ids.data,ipv6.dns,ipv6.method,ipv6.addresses,ipv6.routes,ipv6.addr-gen-mode,ipv6.routing-rules,connection.slave-type,connection.port-type,connection.controller,connection.master,connection.timestamp,ipv4.dns,ipv4.method,ipv4.addresses,ipv4.never-default,ipv4.routes,ipv4.routing-rules,ovs-interface.type" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4220] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4222] audit: op="connection-add" uuid="fd9d2d91-4934-4b8f-a318-cbe602a2ac38" name="vlan20-if" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4252] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4254] audit: op="connection-add" uuid="1cd3f48f-8572-46f4-8849-79769c7469fe" name="vlan21-if" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4284] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4286] audit: op="connection-add" uuid="b38ee61b-bc8a-4e5f-a666-ec49f7e18104" name="vlan22-if" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4313] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4316] audit: op="connection-add" uuid="7139b4eb-3cbc-4ea8-8191-e076d0c1b71d" name="vlan23-if" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4335] audit: op="connection-delete" uuid="ca3faf74-3a1e-393e-b2c9-9f72990abe6a" name="Wired connection 1" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4356] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4372] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4377] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (eeac5863-66ee-4b3f-bf7f-c02d23c041db)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4378] audit: op="connection-activate" uuid="eeac5863-66ee-4b3f-bf7f-c02d23c041db" name="br-ex-br" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4382] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4394] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4409] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (27fc12c3-9aac-4dc3-8080-14921a438ebd)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4412] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4424] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4432] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (fe1178fc-2e29-4419-8399-354dc28e3b2c)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4435] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4446] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4453] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (6012ad6e-71c5-48a9-9c01-3870d9361158)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4455] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4466] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4472] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (808ffa2a-001b-45da-ae40-c67f83a923a5)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4475] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4485] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4494] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (952dc223-7473-4a72-a39e-de9d203b944f)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4496] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4507] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4513] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (58b0e596-4e75-4486-bab7-ad59ffc2a5e8)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4515] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4519] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4522] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4532] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4539] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4545] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (10b840ee-e2b5-4908-8c0e-b2ae3a1e1dbf)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4546] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4551] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4554] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4555] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4557] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4574] device (eth1): disconnecting for new activation request.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4575] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4580] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4584] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4586] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4591] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4599] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4606] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (fd9d2d91-4934-4b8f-a318-cbe602a2ac38)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4607] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4612] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4615] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4617] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4622] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4629] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4636] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (1cd3f48f-8572-46f4-8849-79769c7469fe)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4637] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4643] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4647] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4650] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4656] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4665] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4674] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (b38ee61b-bc8a-4e5f-a666-ec49f7e18104)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4675] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4680] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4683] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4685] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4689] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4695] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4701] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (7139b4eb-3cbc-4ea8-8191-e076d0c1b71d)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4703] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4707] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4712] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4714] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4717] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4744] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4747] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4752] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4755] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4767] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4774] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4779] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4783] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4786] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4794] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4801] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4806] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4808] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4816] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4824] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4829] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4832] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4841] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4848] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4852] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4853] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4859] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4863] dhcp4 (eth0): canceled DHCP transaction
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4863] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4863] dhcp4 (eth0): state changed no lease
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4865] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.4877] audit: op="device-reapply" interface="eth1" ifindex=3 pid=52001 uid=0 result="fail" reason="Device is not activated"
Nov 29 06:11:45 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 06:11:45 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 29 06:11:45 compute-0 systemd-udevd[52005]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 06:11:45 compute-0 kernel: Timeout policy base is empty
Nov 29 06:11:45 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5735] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5741] dhcp4 (eth0): state changed new lease, address=38.102.83.22
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5752] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 29 06:11:45 compute-0 kernel: br-ex: entered promiscuous mode
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5801] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 06:11:45 compute-0 kernel: vlan20: entered promiscuous mode
Nov 29 06:11:45 compute-0 systemd-udevd[52007]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 06:11:45 compute-0 kernel: vlan21: entered promiscuous mode
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5917] device (eth1): Activation: starting connection 'ci-private-network' (b3ca7565-e6c0-5ba2-a076-c2cd58810e8e)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5927] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5929] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5930] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5931] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5933] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5934] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5935] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5941] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5946] device (eth1): disconnecting for new activation request.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5947] audit: op="connection-activate" uuid="b3ca7565-e6c0-5ba2-a076-c2cd58810e8e" name="ci-private-network" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5951] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5957] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5963] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5969] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5976] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5979] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5983] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5986] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.5990] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6003] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6007] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6010] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 kernel: vlan22: entered promiscuous mode
Nov 29 06:11:45 compute-0 systemd-udevd[52006]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6020] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6025] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6030] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6038] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6051] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6070] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6079] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6082] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6083] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6089] device (eth1): Activation: starting connection 'ci-private-network' (b3ca7565-e6c0-5ba2-a076-c2cd58810e8e)
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6092] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52001 uid=0 result="success"
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6116] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6121] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6129] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6145] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 kernel: vlan23: entered promiscuous mode
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6154] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6156] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6161] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6181] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6188] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6192] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6206] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6212] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6221] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6228] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6230] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6232] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6237] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6243] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6247] device (eth1): Activation: successful, device activated.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6252] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6259] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6263] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6268] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6273] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6277] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6284] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.6298] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.7726] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.7734] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 06:11:45 compute-0 NetworkManager[49224]: <info>  [1764396705.7747] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 06:11:46 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 06:11:46 compute-0 sudo[52363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbzewerjhhkzczfhhmzpcwtehrndrlqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396705.9108815-850-212977823475389/AnsiballZ_async_status.py'
Nov 29 06:11:46 compute-0 sudo[52363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:46 compute-0 python3.9[52365]: ansible-ansible.legacy.async_status Invoked with jid=j211298114970.51995 mode=status _async_dir=/root/.ansible_async
Nov 29 06:11:46 compute-0 sudo[52363]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:47 compute-0 NetworkManager[49224]: <info>  [1764396707.2615] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52001 uid=0 result="success"
Nov 29 06:11:47 compute-0 NetworkManager[49224]: <info>  [1764396707.4813] checkpoint[0x55b4cec1d950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 29 06:11:47 compute-0 NetworkManager[49224]: <info>  [1764396707.4817] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52001 uid=0 result="success"
Nov 29 06:11:47 compute-0 ansible-async_wrapper.py[51998]: 51999 still running (300)
Nov 29 06:11:48 compute-0 NetworkManager[49224]: <info>  [1764396708.0159] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52001 uid=0 result="success"
Nov 29 06:11:48 compute-0 NetworkManager[49224]: <info>  [1764396708.0179] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52001 uid=0 result="success"
Nov 29 06:11:48 compute-0 NetworkManager[49224]: <info>  [1764396708.4323] audit: op="networking-control" arg="global-dns-configuration" pid=52001 uid=0 result="success"
Nov 29 06:11:48 compute-0 NetworkManager[49224]: <info>  [1764396708.4386] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 29 06:11:48 compute-0 NetworkManager[49224]: <info>  [1764396708.4422] audit: op="networking-control" arg="global-dns-configuration" pid=52001 uid=0 result="success"
Nov 29 06:11:48 compute-0 NetworkManager[49224]: <info>  [1764396708.4446] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52001 uid=0 result="success"
Nov 29 06:11:48 compute-0 NetworkManager[49224]: <info>  [1764396708.7010] checkpoint[0x55b4cec1da20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 29 06:11:48 compute-0 NetworkManager[49224]: <info>  [1764396708.7018] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52001 uid=0 result="success"
Nov 29 06:11:48 compute-0 ansible-async_wrapper.py[51999]: Module complete (51999)
Nov 29 06:11:49 compute-0 sshd-session[51567]: Connection closed by authenticating user root 193.32.162.157 port 54804 [preauth]
Nov 29 06:11:49 compute-0 sudo[52471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zetiembswrqlueukixynzzaailtratkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396705.9108815-850-212977823475389/AnsiballZ_async_status.py'
Nov 29 06:11:49 compute-0 sudo[52471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:50 compute-0 sshd-session[52373]: Invalid user ftpadmin from 104.208.108.166 port 3774
Nov 29 06:11:50 compute-0 python3.9[52473]: ansible-ansible.legacy.async_status Invoked with jid=j211298114970.51995 mode=status _async_dir=/root/.ansible_async
Nov 29 06:11:50 compute-0 sudo[52471]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:50 compute-0 sshd-session[52373]: Received disconnect from 104.208.108.166 port 3774:11: Bye Bye [preauth]
Nov 29 06:11:50 compute-0 sshd-session[52373]: Disconnected from invalid user ftpadmin 104.208.108.166 port 3774 [preauth]
Nov 29 06:11:50 compute-0 sudo[52572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnssydhtglftmguklspnmpanveyuqdqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396705.9108815-850-212977823475389/AnsiballZ_async_status.py'
Nov 29 06:11:50 compute-0 sudo[52572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:50 compute-0 python3.9[52574]: ansible-ansible.legacy.async_status Invoked with jid=j211298114970.51995 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 06:11:50 compute-0 sudo[52572]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:51 compute-0 sudo[52724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybnhlfxnppeiujbzxlngdxfpyaebkyiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396711.0545173-931-166419454236344/AnsiballZ_stat.py'
Nov 29 06:11:51 compute-0 sudo[52724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:51 compute-0 python3.9[52726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:11:51 compute-0 sudo[52724]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:51 compute-0 sudo[52847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epsswuxbbgerlglbejudjbzdodtrowce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396711.0545173-931-166419454236344/AnsiballZ_copy.py'
Nov 29 06:11:51 compute-0 sudo[52847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:52 compute-0 python3.9[52849]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396711.0545173-931-166419454236344/.source.returncode _original_basename=.8r17aj_q follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:11:52 compute-0 sudo[52847]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:52 compute-0 ansible-async_wrapper.py[51998]: Done in kid B.
Nov 29 06:11:52 compute-0 sudo[53000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkjqqfpojcplbmgniehmubwjakpnknke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396712.5663388-979-162472533845544/AnsiballZ_stat.py'
Nov 29 06:11:52 compute-0 sudo[53000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:53 compute-0 python3.9[53002]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:11:53 compute-0 sudo[53000]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:53 compute-0 sudo[53124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlxuwzumbuwefcipzocmcxmtndcgqyqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396712.5663388-979-162472533845544/AnsiballZ_copy.py'
Nov 29 06:11:53 compute-0 sudo[53124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:53 compute-0 python3.9[53126]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396712.5663388-979-162472533845544/.source.cfg _original_basename=.9ikxmu67 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:11:53 compute-0 sudo[53124]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:54 compute-0 sudo[53276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjqacrthhsgybeqqhrcwklaoomgnaobb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396714.2872279-1024-28353118516007/AnsiballZ_systemd.py'
Nov 29 06:11:54 compute-0 sudo[53276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:11:54 compute-0 python3.9[53278]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:11:55 compute-0 systemd[1]: Reloading Network Manager...
Nov 29 06:11:55 compute-0 NetworkManager[49224]: <info>  [1764396715.0725] audit: op="reload" arg="0" pid=53282 uid=0 result="success"
Nov 29 06:11:55 compute-0 NetworkManager[49224]: <info>  [1764396715.0737] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 29 06:11:55 compute-0 systemd[1]: Reloaded Network Manager.
Nov 29 06:11:55 compute-0 sudo[53276]: pam_unix(sudo:session): session closed for user root
Nov 29 06:11:55 compute-0 sshd-session[45206]: Connection closed by 192.168.122.30 port 57066
Nov 29 06:11:55 compute-0 sshd-session[45203]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:11:55 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 29 06:11:55 compute-0 systemd[1]: session-10.scope: Consumed 54.306s CPU time.
Nov 29 06:11:55 compute-0 systemd-logind[797]: Session 10 logged out. Waiting for processes to exit.
Nov 29 06:11:55 compute-0 systemd-logind[797]: Removed session 10.
Nov 29 06:11:56 compute-0 sshd-session[53311]: Received disconnect from 138.124.186.225 port 34242:11: Bye Bye [preauth]
Nov 29 06:11:56 compute-0 sshd-session[53311]: Disconnected from authenticating user root 138.124.186.225 port 34242 [preauth]
Nov 29 06:11:59 compute-0 sshd-session[52474]: Invalid user sonar from 193.32.162.157 port 39398
Nov 29 06:12:00 compute-0 sshd-session[53317]: Accepted publickey for zuul from 192.168.122.30 port 57566 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:12:00 compute-0 systemd-logind[797]: New session 11 of user zuul.
Nov 29 06:12:00 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 29 06:12:00 compute-0 sshd-session[53315]: Invalid user alma from 31.6.212.12 port 38672
Nov 29 06:12:00 compute-0 sshd-session[53317]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:12:00 compute-0 sshd-session[53315]: Received disconnect from 31.6.212.12 port 38672:11: Bye Bye [preauth]
Nov 29 06:12:00 compute-0 sshd-session[53315]: Disconnected from invalid user alma 31.6.212.12 port 38672 [preauth]
Nov 29 06:12:01 compute-0 sshd-session[52474]: Connection closed by invalid user sonar 193.32.162.157 port 39398 [preauth]
Nov 29 06:12:01 compute-0 python3.9[53470]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:12:02 compute-0 python3.9[53624]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:12:05 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 06:12:05 compute-0 python3.9[53818]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:12:06 compute-0 sshd-session[53320]: Connection closed by 192.168.122.30 port 57566
Nov 29 06:12:06 compute-0 sshd-session[53317]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:12:06 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 29 06:12:06 compute-0 systemd[1]: session-11.scope: Consumed 2.826s CPU time.
Nov 29 06:12:06 compute-0 systemd-logind[797]: Session 11 logged out. Waiting for processes to exit.
Nov 29 06:12:06 compute-0 systemd-logind[797]: Removed session 11.
Nov 29 06:12:11 compute-0 sshd-session[53848]: Accepted publickey for zuul from 192.168.122.30 port 51492 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:12:11 compute-0 systemd-logind[797]: New session 12 of user zuul.
Nov 29 06:12:11 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 29 06:12:11 compute-0 sshd-session[53848]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:12:13 compute-0 python3.9[54002]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:12:14 compute-0 python3.9[54156]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:12:15 compute-0 sudo[54310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpublmnlsegoidbkusoetuwveissctfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396734.8571224-85-177982604964452/AnsiballZ_setup.py'
Nov 29 06:12:15 compute-0 sudo[54310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:15 compute-0 python3.9[54312]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:12:15 compute-0 sudo[54310]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:16 compute-0 sudo[54395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfiprgagoltquxihqkulhutchonpwucl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396734.8571224-85-177982604964452/AnsiballZ_dnf.py'
Nov 29 06:12:16 compute-0 sudo[54395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:16 compute-0 python3.9[54397]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:12:17 compute-0 sudo[54395]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:18 compute-0 sudo[54548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftpcdemmhtorxrspkkfkdmlssjufkvjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396738.1905167-121-262725990161810/AnsiballZ_setup.py'
Nov 29 06:12:18 compute-0 sudo[54548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:18 compute-0 python3.9[54550]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:12:19 compute-0 sudo[54548]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:20 compute-0 sshd-session[54551]: Received disconnect from 103.147.159.91 port 51856:11: Bye Bye [preauth]
Nov 29 06:12:20 compute-0 sshd-session[54551]: Disconnected from authenticating user root 103.147.159.91 port 51856 [preauth]
Nov 29 06:12:21 compute-0 sudo[54745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymyyprtyrpqjmxkflaswbvdcufhyrzjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396740.558316-154-165783473334054/AnsiballZ_file.py'
Nov 29 06:12:21 compute-0 sudo[54745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:21 compute-0 python3.9[54747]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:12:21 compute-0 sudo[54745]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:21 compute-0 sudo[54897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjrezimcsvpvzorkpcyujwwdiueozwvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396741.4430363-178-63818114398675/AnsiballZ_command.py'
Nov 29 06:12:21 compute-0 sudo[54897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:22 compute-0 python3.9[54899]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3680912017-merged.mount: Deactivated successfully.
Nov 29 06:12:22 compute-0 podman[54900]: 2025-11-29 06:12:22.315718116 +0000 UTC m=+0.169982416 system refresh
Nov 29 06:12:22 compute-0 sudo[54897]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 06:12:24 compute-0 sshd-session[54936]: Received disconnect from 193.46.255.217 port 24908:11:  [preauth]
Nov 29 06:12:24 compute-0 sshd-session[54936]: Disconnected from authenticating user root 193.46.255.217 port 24908 [preauth]
Nov 29 06:12:24 compute-0 sudo[55063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvefuaaxlkjlnyhbnrbfatcuygtoftqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396744.1048543-202-156631681591304/AnsiballZ_stat.py'
Nov 29 06:12:24 compute-0 sudo[55063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:24 compute-0 python3.9[55065]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:12:24 compute-0 sudo[55063]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:25 compute-0 sudo[55186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxhcenpowxzcxyelwidufvfqazogfuor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396744.1048543-202-156631681591304/AnsiballZ_copy.py'
Nov 29 06:12:25 compute-0 sudo[55186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:25 compute-0 python3.9[55188]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396744.1048543-202-156631681591304/.source.json follow=False _original_basename=podman_network_config.j2 checksum=fb1097d0bfd110220a1faf17a72ee335f2fbc0a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:12:25 compute-0 sudo[55186]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:26 compute-0 sudo[55338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scnakakhqlsqfgxlnbvmngxhbrgzpljk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396745.7648883-247-221858944286329/AnsiballZ_stat.py'
Nov 29 06:12:26 compute-0 sudo[55338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:26 compute-0 python3.9[55340]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:12:26 compute-0 sudo[55338]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:27 compute-0 sudo[55461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikxdqslppndtdchukcuunyvqzkukkxrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396745.7648883-247-221858944286329/AnsiballZ_copy.py'
Nov 29 06:12:27 compute-0 sudo[55461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:27 compute-0 python3.9[55463]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764396745.7648883-247-221858944286329/.source.conf follow=False _original_basename=registries.conf.j2 checksum=25aa6c560e50dcbd81b989ea46a7865cb55b8998 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:12:27 compute-0 sudo[55461]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:27 compute-0 sudo[55613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxtedhxgwmqfqajtpiziwuecplilmzuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396747.4754257-295-31497750386632/AnsiballZ_ini_file.py'
Nov 29 06:12:27 compute-0 sudo[55613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:28 compute-0 python3.9[55615]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:12:28 compute-0 sudo[55613]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:28 compute-0 sudo[55765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbmvfkojbxsvlasnkyniwvstgznjmohz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396748.3278677-295-60413997176246/AnsiballZ_ini_file.py'
Nov 29 06:12:28 compute-0 sudo[55765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:28 compute-0 python3.9[55767]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:12:28 compute-0 sudo[55765]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:29 compute-0 sudo[55919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mosobshqzqnrlzfivjtqgahpugduqypq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396749.1541252-295-278189676656198/AnsiballZ_ini_file.py'
Nov 29 06:12:29 compute-0 sudo[55919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:29 compute-0 sshd-session[55768]: Invalid user hadoop from 79.116.35.29 port 55074
Nov 29 06:12:29 compute-0 python3.9[55921]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:12:29 compute-0 sudo[55919]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:29 compute-0 sshd-session[55768]: Received disconnect from 79.116.35.29 port 55074:11: Bye Bye [preauth]
Nov 29 06:12:29 compute-0 sshd-session[55768]: Disconnected from invalid user hadoop 79.116.35.29 port 55074 [preauth]
Nov 29 06:12:30 compute-0 sudo[56071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lazldmwujtimbcamhdoxzycfydqtntet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396749.8524005-295-269716611330486/AnsiballZ_ini_file.py'
Nov 29 06:12:30 compute-0 sudo[56071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:30 compute-0 python3.9[56073]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:12:30 compute-0 sudo[56071]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:31 compute-0 sudo[56223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imoiargnjblzzbeioinhczwvsgcezujv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396750.840696-388-244690067338947/AnsiballZ_dnf.py'
Nov 29 06:12:31 compute-0 sudo[56223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:31 compute-0 python3.9[56225]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:12:32 compute-0 sudo[56223]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:33 compute-0 sudo[56376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqgcqakdsmnsfdyhtdcgjucanqdjhcra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396753.2414403-421-81873953940501/AnsiballZ_setup.py'
Nov 29 06:12:33 compute-0 sudo[56376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:33 compute-0 python3.9[56378]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:12:33 compute-0 sudo[56376]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:34 compute-0 sudo[56530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tylwmlpsrcmxaxbhxzzltxctineohyag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396754.1739662-445-118396645143029/AnsiballZ_stat.py'
Nov 29 06:12:34 compute-0 sudo[56530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:35 compute-0 python3.9[56532]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:12:35 compute-0 sudo[56530]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:35 compute-0 sudo[56682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdysbnzkrtrlkjflkteronmqmjmzxgbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396755.3424528-472-178092356100563/AnsiballZ_stat.py'
Nov 29 06:12:35 compute-0 sudo[56682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:35 compute-0 python3.9[56684]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:12:35 compute-0 sudo[56682]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:36 compute-0 sudo[56834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guxwwvnirzdazjqpqqbtjfcthlwnaysy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396756.2870128-502-266434297534645/AnsiballZ_command.py'
Nov 29 06:12:36 compute-0 sudo[56834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:36 compute-0 python3.9[56836]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:12:36 compute-0 sudo[56834]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:37 compute-0 sudo[56987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebjjnptvaffphsfqlyrmvxletargnoev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396757.3261578-532-199179200573307/AnsiballZ_service_facts.py'
Nov 29 06:12:37 compute-0 sudo[56987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:38 compute-0 python3.9[56989]: ansible-service_facts Invoked
Nov 29 06:12:38 compute-0 network[57006]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 06:12:38 compute-0 network[57007]: 'network-scripts' will be removed from distribution in near future.
Nov 29 06:12:38 compute-0 network[57008]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 06:12:42 compute-0 sudo[56987]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:43 compute-0 sudo[57291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhaiotsmohnbcoxibtmnqbftodajwkra ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764396763.491019-577-250112091206939/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764396763.491019-577-250112091206939/args'
Nov 29 06:12:43 compute-0 sudo[57291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:44 compute-0 sudo[57291]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:44 compute-0 sudo[57458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmtowcisalpmpulgqljdbvipnjkxkbla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396764.401624-610-10201085338850/AnsiballZ_dnf.py'
Nov 29 06:12:44 compute-0 sudo[57458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:44 compute-0 python3.9[57460]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:12:46 compute-0 sudo[57458]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:48 compute-0 sudo[57611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofakcofkadqhntbzfjpvuhoxrkbqpdnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396767.8403487-649-250721073416462/AnsiballZ_package_facts.py'
Nov 29 06:12:48 compute-0 sudo[57611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:48 compute-0 python3.9[57613]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 06:12:49 compute-0 sudo[57611]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:50 compute-0 sudo[57763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzgvuiwjvrurtvucaxjhhveiudzhtgtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396769.7853591-679-72706364777051/AnsiballZ_stat.py'
Nov 29 06:12:50 compute-0 sudo[57763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:50 compute-0 python3.9[57765]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:12:50 compute-0 sudo[57763]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:50 compute-0 sudo[57888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjmxltulesinuvfmjsfsvrutsrhpfygo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396769.7853591-679-72706364777051/AnsiballZ_copy.py'
Nov 29 06:12:50 compute-0 sudo[57888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:51 compute-0 python3.9[57890]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396769.7853591-679-72706364777051/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:12:51 compute-0 sudo[57888]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:51 compute-0 sudo[58042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzdvldixozhcfseznpqacjbjwvuajzjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396771.4156015-724-97859955955477/AnsiballZ_stat.py'
Nov 29 06:12:51 compute-0 sudo[58042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:51 compute-0 python3.9[58044]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:12:51 compute-0 sudo[58042]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:52 compute-0 sudo[58167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgzexsdkiedpujiyokbqldvcdhrbadqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396771.4156015-724-97859955955477/AnsiballZ_copy.py'
Nov 29 06:12:52 compute-0 sudo[58167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:52 compute-0 python3.9[58169]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396771.4156015-724-97859955955477/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:12:52 compute-0 sudo[58167]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:54 compute-0 sudo[58321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvtcropvyiqyajuytxiqtkrcajyaywvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396773.67267-787-213986714972101/AnsiballZ_lineinfile.py'
Nov 29 06:12:54 compute-0 sudo[58321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:54 compute-0 python3.9[58323]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:12:54 compute-0 sudo[58321]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:55 compute-0 sshd-session[58350]: Invalid user admin123 from 138.124.186.225 port 46980
Nov 29 06:12:55 compute-0 sshd-session[58350]: Received disconnect from 138.124.186.225 port 46980:11: Bye Bye [preauth]
Nov 29 06:12:55 compute-0 sshd-session[58350]: Disconnected from invalid user admin123 138.124.186.225 port 46980 [preauth]
Nov 29 06:12:55 compute-0 sudo[58477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmeuxzlktqrnawbotzeqhipbkskpnyyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396775.603238-832-139228104459303/AnsiballZ_setup.py'
Nov 29 06:12:55 compute-0 sudo[58477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:56 compute-0 python3.9[58479]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:12:56 compute-0 sudo[58477]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:57 compute-0 sudo[58561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzhsdarwnefywcsxyszepxymwxkdvihn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396775.603238-832-139228104459303/AnsiballZ_systemd.py'
Nov 29 06:12:57 compute-0 sudo[58561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:57 compute-0 python3.9[58563]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:12:57 compute-0 sudo[58561]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:58 compute-0 sudo[58715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygwdyjdkqrxmbhphvxdvaigslvnupaft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396778.440723-880-155429935481136/AnsiballZ_setup.py'
Nov 29 06:12:58 compute-0 sudo[58715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:59 compute-0 python3.9[58717]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:12:59 compute-0 sudo[58715]: pam_unix(sudo:session): session closed for user root
Nov 29 06:12:59 compute-0 sudo[58799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuypfewdspctsritjrydsncyopotpkkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396778.440723-880-155429935481136/AnsiballZ_systemd.py'
Nov 29 06:12:59 compute-0 sudo[58799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:12:59 compute-0 python3.9[58801]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:12:59 compute-0 chronyd[800]: chronyd exiting
Nov 29 06:12:59 compute-0 systemd[1]: Stopping NTP client/server...
Nov 29 06:12:59 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 29 06:12:59 compute-0 systemd[1]: Stopped NTP client/server.
Nov 29 06:12:59 compute-0 systemd[1]: Starting NTP client/server...
Nov 29 06:13:00 compute-0 chronyd[58809]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 06:13:00 compute-0 chronyd[58809]: Frequency -28.371 +/- 0.174 ppm read from /var/lib/chrony/drift
Nov 29 06:13:00 compute-0 chronyd[58809]: Loaded seccomp filter (level 2)
Nov 29 06:13:00 compute-0 systemd[1]: Started NTP client/server.
Nov 29 06:13:00 compute-0 sudo[58799]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:00 compute-0 sshd-session[53851]: Connection closed by 192.168.122.30 port 51492
Nov 29 06:13:00 compute-0 sshd-session[53848]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:13:00 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 29 06:13:00 compute-0 systemd[1]: session-12.scope: Consumed 29.075s CPU time.
Nov 29 06:13:00 compute-0 systemd-logind[797]: Session 12 logged out. Waiting for processes to exit.
Nov 29 06:13:00 compute-0 systemd-logind[797]: Removed session 12.
Nov 29 06:13:01 compute-0 sshd-session[58835]: Invalid user in from 104.208.108.166 port 57574
Nov 29 06:13:01 compute-0 sshd-session[58835]: Received disconnect from 104.208.108.166 port 57574:11: Bye Bye [preauth]
Nov 29 06:13:01 compute-0 sshd-session[58835]: Disconnected from invalid user in 104.208.108.166 port 57574 [preauth]
Nov 29 06:13:06 compute-0 sshd-session[58837]: Accepted publickey for zuul from 192.168.122.30 port 59116 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:13:06 compute-0 systemd-logind[797]: New session 13 of user zuul.
Nov 29 06:13:06 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 29 06:13:06 compute-0 sshd-session[58837]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:13:06 compute-0 sudo[58990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqyqvgtwftodohineyhwwmalmgvohyog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396786.327395-32-138592423762988/AnsiballZ_file.py'
Nov 29 06:13:06 compute-0 sudo[58990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:07 compute-0 python3.9[58992]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:07 compute-0 sudo[58990]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:07 compute-0 sudo[59142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atjyyeckagthaacggrkvqxjkknnjwbrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396787.3012104-68-188575026150956/AnsiballZ_stat.py'
Nov 29 06:13:07 compute-0 sudo[59142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:08 compute-0 python3.9[59144]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:13:08 compute-0 sudo[59142]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:08 compute-0 sudo[59265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aickuxlmkktiogfuvjdwonadzvurqotl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396787.3012104-68-188575026150956/AnsiballZ_copy.py'
Nov 29 06:13:08 compute-0 sudo[59265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:08 compute-0 python3.9[59267]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396787.3012104-68-188575026150956/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:08 compute-0 sudo[59265]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:09 compute-0 sshd-session[58840]: Connection closed by 192.168.122.30 port 59116
Nov 29 06:13:09 compute-0 sshd-session[58837]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:13:09 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 29 06:13:09 compute-0 systemd[1]: session-13.scope: Consumed 1.783s CPU time.
Nov 29 06:13:09 compute-0 systemd-logind[797]: Session 13 logged out. Waiting for processes to exit.
Nov 29 06:13:09 compute-0 systemd-logind[797]: Removed session 13.
Nov 29 06:13:14 compute-0 sshd-session[59292]: Accepted publickey for zuul from 192.168.122.30 port 59128 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:13:14 compute-0 systemd-logind[797]: New session 14 of user zuul.
Nov 29 06:13:14 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 29 06:13:14 compute-0 sshd-session[59292]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:13:15 compute-0 python3.9[59445]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:13:16 compute-0 sudo[59599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byfvakzsxiuqjkdvsuognjayfqpnwtse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396796.1452594-63-37543612126293/AnsiballZ_file.py'
Nov 29 06:13:16 compute-0 sudo[59599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:16 compute-0 python3.9[59601]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:16 compute-0 sudo[59599]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:17 compute-0 sudo[59774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbyxvjmmksysnkrfsvxoakpglgqileub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396797.1152694-87-90994236813012/AnsiballZ_stat.py'
Nov 29 06:13:17 compute-0 sudo[59774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:17 compute-0 python3.9[59776]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:13:17 compute-0 sudo[59774]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:18 compute-0 sudo[59897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjwqlmmdtzzpenongsvvmclclakrthke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396797.1152694-87-90994236813012/AnsiballZ_copy.py'
Nov 29 06:13:18 compute-0 sudo[59897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:18 compute-0 python3.9[59899]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764396797.1152694-87-90994236813012/.source.json _original_basename=.12wbzro5 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:18 compute-0 sudo[59897]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:19 compute-0 sudo[60049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imzbqtyzntmbyybmnxsizigfejeeaoya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396799.25952-156-10757159972991/AnsiballZ_stat.py'
Nov 29 06:13:19 compute-0 sudo[60049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:19 compute-0 python3.9[60051]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:13:19 compute-0 sudo[60049]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:20 compute-0 sudo[60172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmlhhyauhsihzplmhlgzoqvncvhaajnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396799.25952-156-10757159972991/AnsiballZ_copy.py'
Nov 29 06:13:20 compute-0 sudo[60172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:20 compute-0 python3.9[60174]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396799.25952-156-10757159972991/.source _original_basename=.0fck9vaq follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:20 compute-0 sudo[60172]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:21 compute-0 sudo[60325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjydvsomtktbjbhzkpaalhcaxthlxfxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396801.0608876-204-225340390963285/AnsiballZ_file.py'
Nov 29 06:13:21 compute-0 sudo[60325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:21 compute-0 python3.9[60328]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:13:21 compute-0 sudo[60325]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:22 compute-0 sshd-session[60322]: Received disconnect from 31.6.212.12 port 58150:11: Bye Bye [preauth]
Nov 29 06:13:22 compute-0 sshd-session[60322]: Disconnected from authenticating user root 31.6.212.12 port 58150 [preauth]
Nov 29 06:13:22 compute-0 sudo[60478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svyswpvjgslrjrksvyjoltxnhywsyscp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396801.8954341-228-303409831711/AnsiballZ_stat.py'
Nov 29 06:13:22 compute-0 sudo[60478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:22 compute-0 python3.9[60480]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:13:22 compute-0 sudo[60478]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:22 compute-0 sudo[60601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlbaqchbeinheuhajsortzgerykjavot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396801.8954341-228-303409831711/AnsiballZ_copy.py'
Nov 29 06:13:22 compute-0 sudo[60601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:23 compute-0 python3.9[60603]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764396801.8954341-228-303409831711/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:13:23 compute-0 sudo[60601]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:23 compute-0 sudo[60753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xluddeqxqfujwizouuitnxmxcgpukarz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396803.3047166-228-78037428667613/AnsiballZ_stat.py'
Nov 29 06:13:23 compute-0 sudo[60753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:23 compute-0 python3.9[60755]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:13:23 compute-0 sudo[60753]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:24 compute-0 sudo[60876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uthxmrelmmkkhckclrtpjhxkiusbgskx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396803.3047166-228-78037428667613/AnsiballZ_copy.py'
Nov 29 06:13:24 compute-0 sudo[60876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:24 compute-0 python3.9[60878]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764396803.3047166-228-78037428667613/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:13:24 compute-0 sudo[60876]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:24 compute-0 sudo[61028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lolecrnwuasyvwetcrrbpdswhdiqxcie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396804.572591-315-101354590644323/AnsiballZ_file.py'
Nov 29 06:13:24 compute-0 sudo[61028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:25 compute-0 python3.9[61030]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:25 compute-0 sudo[61028]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:25 compute-0 sudo[61180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayhghsmpgowijhvlsjyqqmrauzgwmvye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396805.4252932-339-101541288583640/AnsiballZ_stat.py'
Nov 29 06:13:25 compute-0 sudo[61180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:26 compute-0 python3.9[61182]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:13:26 compute-0 sudo[61180]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:26 compute-0 sudo[61303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yioekutgczrfamkbmzrekkgltpthorde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396805.4252932-339-101541288583640/AnsiballZ_copy.py'
Nov 29 06:13:26 compute-0 sudo[61303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:26 compute-0 python3.9[61305]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396805.4252932-339-101541288583640/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:26 compute-0 sudo[61303]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:27 compute-0 sudo[61455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoktfigicibzlejcevpyijwzdjqhgbas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396806.955549-384-258893491244390/AnsiballZ_stat.py'
Nov 29 06:13:27 compute-0 sudo[61455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:27 compute-0 python3.9[61457]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:13:27 compute-0 sudo[61455]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:27 compute-0 sudo[61578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcdlzshvotwxkasjbowwbpsfzkauidem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396806.955549-384-258893491244390/AnsiballZ_copy.py'
Nov 29 06:13:27 compute-0 sudo[61578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:28 compute-0 python3.9[61580]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396806.955549-384-258893491244390/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:28 compute-0 sudo[61578]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:29 compute-0 sudo[61730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgioooahwtankietwfoevlbujokyeeee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396808.566801-429-191040750125935/AnsiballZ_systemd.py'
Nov 29 06:13:29 compute-0 sudo[61730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:29 compute-0 python3.9[61732]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:13:29 compute-0 systemd[1]: Reloading.
Nov 29 06:13:29 compute-0 systemd-rc-local-generator[61755]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:13:29 compute-0 systemd-sysv-generator[61761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:13:29 compute-0 systemd[1]: Reloading.
Nov 29 06:13:29 compute-0 systemd-rc-local-generator[61794]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:13:30 compute-0 systemd-sysv-generator[61800]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:13:30 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 29 06:13:30 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 29 06:13:30 compute-0 sudo[61730]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:31 compute-0 sudo[61956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlzsphhzgmtnhiucmxbpmeacrhydojfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396810.780518-453-16139603154109/AnsiballZ_stat.py'
Nov 29 06:13:31 compute-0 sudo[61956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:31 compute-0 python3.9[61958]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:13:31 compute-0 sudo[61956]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:31 compute-0 sudo[62079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siwkceolmwzjafwertbthbnengrvlvsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396810.780518-453-16139603154109/AnsiballZ_copy.py'
Nov 29 06:13:31 compute-0 sudo[62079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:31 compute-0 python3.9[62081]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396810.780518-453-16139603154109/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:31 compute-0 sudo[62079]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:32 compute-0 sshd-session[62082]: Invalid user alex from 79.116.35.29 port 54384
Nov 29 06:13:32 compute-0 sshd-session[62082]: Received disconnect from 79.116.35.29 port 54384:11: Bye Bye [preauth]
Nov 29 06:13:32 compute-0 sshd-session[62082]: Disconnected from invalid user alex 79.116.35.29 port 54384 [preauth]
Nov 29 06:13:32 compute-0 sudo[62233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhknxqjmfpykcwxnaagzeretlahxkbbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396812.2579424-498-56510016278911/AnsiballZ_stat.py'
Nov 29 06:13:32 compute-0 sudo[62233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:32 compute-0 python3.9[62235]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:13:32 compute-0 sudo[62233]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:33 compute-0 sudo[62356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsabuaqvizivwiszqslwmlzqeqhaiudj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396812.2579424-498-56510016278911/AnsiballZ_copy.py'
Nov 29 06:13:33 compute-0 sudo[62356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:33 compute-0 python3.9[62358]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396812.2579424-498-56510016278911/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:33 compute-0 sudo[62356]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:34 compute-0 sudo[62508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plxkepqbnqofwtsudrgaykxamtmlhnvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396813.6966927-543-213340260270222/AnsiballZ_systemd.py'
Nov 29 06:13:34 compute-0 sudo[62508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:34 compute-0 python3.9[62510]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:13:34 compute-0 systemd[1]: Reloading.
Nov 29 06:13:34 compute-0 systemd-rc-local-generator[62534]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:13:34 compute-0 systemd-sysv-generator[62539]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:13:34 compute-0 systemd[1]: Reloading.
Nov 29 06:13:34 compute-0 systemd-rc-local-generator[62569]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:13:34 compute-0 systemd-sysv-generator[62577]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:13:34 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 06:13:34 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 06:13:34 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 06:13:34 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 06:13:34 compute-0 sudo[62508]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:36 compute-0 python3.9[62735]: ansible-ansible.builtin.service_facts Invoked
Nov 29 06:13:36 compute-0 network[62752]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 06:13:36 compute-0 network[62753]: 'network-scripts' will be removed from distribution in near future.
Nov 29 06:13:36 compute-0 network[62754]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 06:13:40 compute-0 sudo[63017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljzcksophivlgvtindbvbuwjqewxfhfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396820.5552778-591-39226904160437/AnsiballZ_systemd.py'
Nov 29 06:13:40 compute-0 sudo[63017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:41 compute-0 python3.9[63019]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:13:41 compute-0 sshd-session[62890]: Invalid user support from 103.147.159.91 port 51978
Nov 29 06:13:41 compute-0 sshd-session[62890]: Received disconnect from 103.147.159.91 port 51978:11: Bye Bye [preauth]
Nov 29 06:13:41 compute-0 sshd-session[62890]: Disconnected from invalid user support 103.147.159.91 port 51978 [preauth]
Nov 29 06:13:42 compute-0 systemd[1]: Reloading.
Nov 29 06:13:42 compute-0 systemd-rc-local-generator[63050]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:13:42 compute-0 systemd-sysv-generator[63055]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:13:42 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 29 06:13:42 compute-0 iptables.init[63060]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 29 06:13:42 compute-0 iptables.init[63060]: iptables: Flushing firewall rules: [  OK  ]
Nov 29 06:13:42 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 29 06:13:42 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 29 06:13:43 compute-0 sudo[63017]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:43 compute-0 sudo[63254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwkhblodvmuihymmltodpxstzswuxfvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396823.202232-591-947625814647/AnsiballZ_systemd.py'
Nov 29 06:13:43 compute-0 sudo[63254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:43 compute-0 python3.9[63256]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:13:43 compute-0 sudo[63254]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:44 compute-0 sudo[63408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofivquwjldomejblopvllxgyeazpzgbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396824.3513632-639-201897870738402/AnsiballZ_systemd.py'
Nov 29 06:13:44 compute-0 sudo[63408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:44 compute-0 python3.9[63410]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:13:45 compute-0 systemd[1]: Reloading.
Nov 29 06:13:45 compute-0 systemd-rc-local-generator[63445]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:13:45 compute-0 systemd-sysv-generator[63448]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:13:45 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 29 06:13:45 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 29 06:13:45 compute-0 sudo[63408]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:46 compute-0 sudo[63603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyuepfehbdudpzjwhekbdxwkobuecljb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396825.7724252-663-146594215652802/AnsiballZ_command.py'
Nov 29 06:13:46 compute-0 sudo[63603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:46 compute-0 sshd-session[63411]: Received disconnect from 115.190.37.201 port 39998:11: Bye Bye [preauth]
Nov 29 06:13:46 compute-0 sshd-session[63411]: Disconnected from authenticating user root 115.190.37.201 port 39998 [preauth]
Nov 29 06:13:46 compute-0 python3.9[63605]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:13:46 compute-0 sudo[63603]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:47 compute-0 sudo[63756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfbahobmwkemjbcjbovstxhksyeejiiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396827.1078682-705-203401036606472/AnsiballZ_stat.py'
Nov 29 06:13:47 compute-0 sudo[63756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:47 compute-0 python3.9[63758]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:13:47 compute-0 sudo[63756]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:48 compute-0 sudo[63881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbnskjlzjasrpnhcjggeuhwjeikdfuvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396827.1078682-705-203401036606472/AnsiballZ_copy.py'
Nov 29 06:13:48 compute-0 sudo[63881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:48 compute-0 python3.9[63883]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396827.1078682-705-203401036606472/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:48 compute-0 sudo[63881]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:49 compute-0 sudo[64034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxqmqixmplwjdfanarrgbenrdnczkrnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396828.718336-750-205637275614992/AnsiballZ_systemd.py'
Nov 29 06:13:49 compute-0 sudo[64034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:49 compute-0 python3.9[64036]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:13:49 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 29 06:13:49 compute-0 sshd[1008]: Received SIGHUP; restarting.
Nov 29 06:13:49 compute-0 sshd[1008]: Server listening on 0.0.0.0 port 22.
Nov 29 06:13:49 compute-0 sshd[1008]: Server listening on :: port 22.
Nov 29 06:13:49 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 29 06:13:49 compute-0 sudo[64034]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:50 compute-0 sudo[64190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyhhcgeakvxgsbmgsedspluzncwpuxfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396829.728814-774-120736120917705/AnsiballZ_file.py'
Nov 29 06:13:50 compute-0 sudo[64190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:50 compute-0 python3.9[64192]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:50 compute-0 sudo[64190]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:50 compute-0 sudo[64342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scivykjlnqrtbdfnsnnzeqoibcbcpiph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396830.5517242-798-165049261770878/AnsiballZ_stat.py'
Nov 29 06:13:50 compute-0 sudo[64342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:51 compute-0 python3.9[64344]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:13:51 compute-0 sudo[64342]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:51 compute-0 sudo[64465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbrfkblgifpvqfxlekzgdjuobfnouhac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396830.5517242-798-165049261770878/AnsiballZ_copy.py'
Nov 29 06:13:51 compute-0 sudo[64465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:51 compute-0 python3.9[64467]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396830.5517242-798-165049261770878/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:51 compute-0 sudo[64465]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:52 compute-0 sudo[64617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwbxqzjyhdbrnsnrwecdgbleikycpcla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396832.4274843-852-13931511806677/AnsiballZ_timezone.py'
Nov 29 06:13:52 compute-0 sudo[64617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:53 compute-0 python3.9[64619]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 06:13:53 compute-0 systemd[1]: Starting Time & Date Service...
Nov 29 06:13:53 compute-0 systemd[1]: Started Time & Date Service.
Nov 29 06:13:53 compute-0 sudo[64617]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:54 compute-0 sudo[64773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcuiatswmzgnaodhmajlwijbvaxxnuzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396833.6571321-879-275336539989811/AnsiballZ_file.py'
Nov 29 06:13:54 compute-0 sudo[64773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:54 compute-0 python3.9[64775]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:54 compute-0 sudo[64773]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:54 compute-0 sudo[64925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzplczgltmbeyteshabfoenhcrljhdvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396834.4880562-903-277422186402380/AnsiballZ_stat.py'
Nov 29 06:13:54 compute-0 sudo[64925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:55 compute-0 python3.9[64927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:13:55 compute-0 sudo[64925]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:55 compute-0 sudo[65048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqylorthpcmdcsppmgapkdwhjvsgnzmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396834.4880562-903-277422186402380/AnsiballZ_copy.py'
Nov 29 06:13:55 compute-0 sudo[65048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:55 compute-0 python3.9[65050]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396834.4880562-903-277422186402380/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:55 compute-0 sudo[65048]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:56 compute-0 sudo[65200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhjrydbebkumrmmcopzvaezkihlponke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396835.9894624-948-186026062831091/AnsiballZ_stat.py'
Nov 29 06:13:56 compute-0 sudo[65200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:56 compute-0 python3.9[65202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:13:56 compute-0 sudo[65200]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:57 compute-0 sudo[65323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcmaqjgjetxbieboyulxvctplmkgsxet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396835.9894624-948-186026062831091/AnsiballZ_copy.py'
Nov 29 06:13:57 compute-0 sudo[65323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:57 compute-0 python3.9[65325]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764396835.9894624-948-186026062831091/.source.yaml _original_basename=.wxruc_29 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:57 compute-0 sudo[65323]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:57 compute-0 sudo[65475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqxrlvfpiqhqlyhqikbagjpxkdlaixnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396837.575144-993-85194286364716/AnsiballZ_stat.py'
Nov 29 06:13:57 compute-0 sudo[65475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:58 compute-0 python3.9[65477]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:13:58 compute-0 sudo[65475]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:58 compute-0 sudo[65600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btzbpgcvdripfjgrpdxqjfmdbfohuhtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396837.575144-993-85194286364716/AnsiballZ_copy.py'
Nov 29 06:13:58 compute-0 sudo[65600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:58 compute-0 python3.9[65602]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396837.575144-993-85194286364716/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:13:58 compute-0 sudo[65600]: pam_unix(sudo:session): session closed for user root
Nov 29 06:13:58 compute-0 sshd-session[65478]: Received disconnect from 138.124.186.225 port 59856:11: Bye Bye [preauth]
Nov 29 06:13:58 compute-0 sshd-session[65478]: Disconnected from authenticating user root 138.124.186.225 port 59856 [preauth]
Nov 29 06:13:59 compute-0 sudo[65752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgsboptdtjbqdubkjvhjzggmsfradcan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396839.1831186-1038-103497158102079/AnsiballZ_command.py'
Nov 29 06:13:59 compute-0 sudo[65752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:13:59 compute-0 python3.9[65754]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:13:59 compute-0 sudo[65752]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:00 compute-0 sudo[65905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auitvhhqbesxbimcyretlaiikbvfotoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396840.080057-1062-117966187873003/AnsiballZ_command.py'
Nov 29 06:14:00 compute-0 sudo[65905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:00 compute-0 python3.9[65907]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:14:00 compute-0 sudo[65905]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:01 compute-0 sudo[66058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhnzdoqynvxhoimztdhtawopjyuuojce ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764396840.980857-1086-187530523189923/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 06:14:01 compute-0 sudo[66058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:01 compute-0 python3[66060]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 06:14:01 compute-0 sudo[66058]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:02 compute-0 sudo[66210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xybtqkvicgghvtpdockscukwgttbhsyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396841.8911374-1110-108776690523190/AnsiballZ_stat.py'
Nov 29 06:14:02 compute-0 sudo[66210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:02 compute-0 python3.9[66212]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:14:02 compute-0 sudo[66210]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:02 compute-0 sudo[66333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-makemyrtnfccatscpepmgecdakcjhiuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396841.8911374-1110-108776690523190/AnsiballZ_copy.py'
Nov 29 06:14:02 compute-0 sudo[66333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:03 compute-0 python3.9[66335]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396841.8911374-1110-108776690523190/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:14:03 compute-0 sudo[66333]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:03 compute-0 sudo[66485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlghrdydibtniwqnsfmqqtfhvvklcyps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396843.5071545-1155-183753486107571/AnsiballZ_stat.py'
Nov 29 06:14:03 compute-0 sudo[66485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:04 compute-0 python3.9[66487]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:14:04 compute-0 sudo[66485]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:04 compute-0 sudo[66608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kozrxnzsifxrujmsbpojkbjokuzzfyxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396843.5071545-1155-183753486107571/AnsiballZ_copy.py'
Nov 29 06:14:04 compute-0 sudo[66608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:04 compute-0 python3.9[66610]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396843.5071545-1155-183753486107571/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:14:04 compute-0 sudo[66608]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:05 compute-0 sudo[66760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aszglmizxcygkvgcsryzhqseudbcxxmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396845.110764-1200-40766953432673/AnsiballZ_stat.py'
Nov 29 06:14:05 compute-0 sudo[66760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:05 compute-0 python3.9[66762]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:14:05 compute-0 sudo[66760]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:06 compute-0 sudo[66883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otamocphgaqianzyknwkvtffefyvlpcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396845.110764-1200-40766953432673/AnsiballZ_copy.py'
Nov 29 06:14:06 compute-0 sudo[66883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:06 compute-0 python3.9[66885]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396845.110764-1200-40766953432673/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:14:06 compute-0 sudo[66883]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:07 compute-0 sudo[67035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppmlcriwbptyenmnfbinogtxcdlqnprd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396846.6498346-1245-83437109346091/AnsiballZ_stat.py'
Nov 29 06:14:07 compute-0 sudo[67035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:07 compute-0 python3.9[67037]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:14:07 compute-0 sudo[67035]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:07 compute-0 sudo[67158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtfqrimsrwievhacutdwvvptqwdgjxvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396846.6498346-1245-83437109346091/AnsiballZ_copy.py'
Nov 29 06:14:07 compute-0 sudo[67158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:07 compute-0 python3.9[67160]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396846.6498346-1245-83437109346091/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:14:07 compute-0 sudo[67158]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:08 compute-0 sudo[67310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rutkwoybtpbanbnvemsblvjlozkfphux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396848.1760237-1290-124059481833104/AnsiballZ_stat.py'
Nov 29 06:14:08 compute-0 sudo[67310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:08 compute-0 python3.9[67312]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:14:08 compute-0 sudo[67310]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:09 compute-0 sudo[67433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxenhcfiygndoxboudvitdpsdkeotwhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396848.1760237-1290-124059481833104/AnsiballZ_copy.py'
Nov 29 06:14:09 compute-0 sudo[67433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:09 compute-0 python3.9[67435]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764396848.1760237-1290-124059481833104/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:14:09 compute-0 sudo[67433]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:10 compute-0 sudo[67585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzufdzrjrwqezbciqxgqozfostnbvmlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396849.7620225-1335-73693202577575/AnsiballZ_file.py'
Nov 29 06:14:10 compute-0 sudo[67585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:10 compute-0 python3.9[67587]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:14:10 compute-0 sudo[67585]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:10 compute-0 sudo[67737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlhosuhipjgqnwdcllybwozidjgoapzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396850.6343126-1359-218830074490994/AnsiballZ_command.py'
Nov 29 06:14:10 compute-0 sudo[67737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:11 compute-0 python3.9[67739]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:14:11 compute-0 sudo[67737]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:12 compute-0 sudo[67896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpbkygwyxbsjpmfzzfcfqxlgaqwuyshl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396851.5941193-1383-33753048200144/AnsiballZ_blockinfile.py'
Nov 29 06:14:12 compute-0 sudo[67896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:12 compute-0 python3.9[67898]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:14:12 compute-0 sudo[67896]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:13 compute-0 sudo[68049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyzbwqqqnddrcxoviconpfafantvkuxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396852.8162906-1410-92808759968069/AnsiballZ_file.py'
Nov 29 06:14:13 compute-0 sudo[68049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:13 compute-0 python3.9[68051]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:14:13 compute-0 sudo[68049]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:14 compute-0 sudo[68203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biamhoymeclsqkinbyrllzffzwkdcbqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396853.5017116-1410-91429290910577/AnsiballZ_file.py'
Nov 29 06:14:14 compute-0 sudo[68203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:14 compute-0 python3.9[68205]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:14:14 compute-0 sudo[68203]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:14 compute-0 sshd-session[68089]: Received disconnect from 104.208.108.166 port 28952:11: Bye Bye [preauth]
Nov 29 06:14:14 compute-0 sshd-session[68089]: Disconnected from authenticating user root 104.208.108.166 port 28952 [preauth]
Nov 29 06:14:15 compute-0 sudo[68355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caspygsusqcljohpppdohuirrkaggdvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396854.7551026-1455-52395157239938/AnsiballZ_mount.py'
Nov 29 06:14:15 compute-0 sudo[68355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:15 compute-0 python3.9[68357]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 06:14:15 compute-0 sudo[68355]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:15 compute-0 sudo[68508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pipsbpvrqyqllewhrizxbmvnmbkruxqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396855.5946026-1455-39979062015611/AnsiballZ_mount.py'
Nov 29 06:14:15 compute-0 sudo[68508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:16 compute-0 python3.9[68510]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 06:14:16 compute-0 sudo[68508]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:16 compute-0 sshd-session[59295]: Connection closed by 192.168.122.30 port 59128
Nov 29 06:14:16 compute-0 sshd-session[59292]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:14:16 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 29 06:14:16 compute-0 systemd[1]: session-14.scope: Consumed 39.685s CPU time.
Nov 29 06:14:16 compute-0 systemd-logind[797]: Session 14 logged out. Waiting for processes to exit.
Nov 29 06:14:16 compute-0 systemd-logind[797]: Removed session 14.
Nov 29 06:14:23 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 06:14:26 compute-0 sshd-session[68540]: Accepted publickey for zuul from 192.168.122.30 port 47344 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:14:26 compute-0 systemd-logind[797]: New session 15 of user zuul.
Nov 29 06:14:26 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 29 06:14:26 compute-0 sshd-session[68540]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:14:27 compute-0 sudo[68693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcelvzqctbxpfyqugwrfyflwucjxcfgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396866.651918-23-21352854862978/AnsiballZ_tempfile.py'
Nov 29 06:14:27 compute-0 sudo[68693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:27 compute-0 python3.9[68695]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 06:14:27 compute-0 sudo[68693]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:28 compute-0 sudo[68845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpfowxziakjyypafovvdofjgvljrpjwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396867.6232631-59-117152868983217/AnsiballZ_stat.py'
Nov 29 06:14:28 compute-0 sudo[68845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:28 compute-0 python3.9[68847]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:14:28 compute-0 sudo[68845]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:29 compute-0 sudo[68997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqoqbjvaesxsefhtylzfpmvymgvzqvqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396868.6797953-89-125486918688865/AnsiballZ_setup.py'
Nov 29 06:14:29 compute-0 sudo[68997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:29 compute-0 python3.9[68999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:14:29 compute-0 sudo[68997]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:30 compute-0 sudo[69149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gradwhhqmryibunkzymhagmvhcuxytov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396869.9516137-114-109229547376043/AnsiballZ_blockinfile.py'
Nov 29 06:14:30 compute-0 sudo[69149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:30 compute-0 python3.9[69151]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCX0dhB1m0xL0qEi5jnTQLLB4bvueVV5foNrqU/OkfV/4gRyp7uP2q21lWq5Dtl2GLk51pS6oD41RI41Y5g7OSRs8b1Z66d6X1QgX0Qns6pv7FwmNSQ25+2VGV6lppnaN5e+JHiwTmzpf82hl/MiiJrHo7B63mllKyl9SZJxUhP9RR4czS3QNYQsZyP7sZeCWothTZ2Q/GK4BWBEtj2+ifeOpa342IivopCH05YVQOx9bpsdFHMYaalMDCwvr2lfVns8aTcpJ3z9uE8wLdKWTyiinT7nuLX6RuPwhXB2proBRH1wrGSIUgcVcizkWn8QizD8LlsGFcHIQJkmq+sJz6r7cCZLIfS6hdAzI+hYbJie6n/agwfxe4r+mbXsmmC6ALKKk7CEnaiNnDg0fgTaUfBPwSfu+JmVrjdSO+S8f/CMbtYeO6QknOxhLV9oK6knszv7nLlSYXTzXanHkN4Y0fW3dsSvoE+qDR0YijbbT8slqMd6z95wWVDFUmTcN8Nzk8=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILci1PI4hoB56+xxS5gSMKceuJ/dv6t7etpmtENwoSFr
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJIaOLr2ntjSUcigXC7a0sFoonsuh0ChCx2a1R6G8EDmJ8/ZB8NEiJE6KAQJDNU5XsXjuaC44eJhOUMRK9r98xA=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2GXKCQiCwQEMihcSwDVeJtG2CpTemmA6MTbtOkxbB3OAV5PK8v8imPvDGMDurfGFQG0RzWyv9szlMJXdgIkwejIfy/AY7p6nemHOpu6DdAx0EA/jg1YcOIeeEhyMw1/oFzjYClGMohaI1oTKHtR29UXWphTAroOkf26Exvco6hh2ApRTXV9ObzSoOyCC7+OZcOWgYzdoCfu/0FDGkH2ksKLQS7d4AAh/XZ/njXhK57U7ptxHCReUPECGRv7KB4f8TelZDAIeUyp7ngd/9ivUDO1zue1Qr9ECzTzAFqippGXFmYl3+oSid03CY7bqnxav4xWt7UukbaO57goyIPfkklPdC1kA7kZqa9bqeDU1WgDkqnLu8hluArB0Y0Jz+hDfx9pTbAL6MklraoLaGrnrgcibAollAN+7WGqdWxUotENYaljO7P1Z18MlNllWFzk4Le5jMLNL8qArSlzM+ufOThnLdGEuYZhH1x969AisGQ4MQWn0P0lZFu6fE5VSNA/k=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDdPWx5WoFJTxz6PiFZL5f3XrtE682RjGFiIpoe0LXZO
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFQlZMweHfLYiJFtm1r2tQze/oNx6KzgaXkK+Kof7POk0cFMLbTsXU8qgbQMh4o5LVO0Hbas4mAqxRkGcFCg2Po=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUVpPatup3d17omeiTdJaYR8jCcDbraJSPBxWy49Wxst4G+6/lD41HVIKmjgCgIbbmYSFBPQmoXt4gFXP4FRKna6AbQWi0kwF3/T2biQ2qCid0HVDSS8YRVlyrpdVc1/bIg6YNLkGnhzOMp0S1443+cg5PqutAbrAT1LOg6lSBu+K9gIqJ4un3l2guSweoyba5UhMyjrq4Pffx1QCuBggtYSjmA9Q1r5VVNc2J7AbP0QuzOe6J6DhpdGJsfmHDVXZb/4b/aPUdCTKkLseyUtcqElWVhhnGnpYSJdN81ejalSktGHE4JRHih19wwTokiKvoczUgijBzOfl+kt2ELcpDgzpzY0M9yd0Zz7wrK4rLM6hi8x3LYZXZv8N7KnawUcJ2jfzilx1BVLdNzgwDNB7ZlP4O9Vs3fKnBufCUFPNcRyWl6ooczepbgxqgSbr/Ham2O4/qzvJmzLtu0KxBkaFALRWnyM39nYVE/jrMKJ5ihtVDxIY9FGma/Jifg15gqI0=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN19pK3a7AH/OiwlqJTVWP/qzU/QzkC16s4D1xY1Vn6J
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLsXsjJNPVMX1YVTe2oBmcZpUSiv3HOeuICgZtQun4hTopMXH9dE1jQeUruGwqZ+NsKW6X2bLZZJ0/tcn2owL8Q=
                                             create=True mode=0644 path=/tmp/ansible.tvsuidq6 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:14:30 compute-0 sudo[69149]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:31 compute-0 sudo[69301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsiwkbouooosllivhxxurhkbxstykzsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396870.9266045-138-152415316628271/AnsiballZ_command.py'
Nov 29 06:14:31 compute-0 sudo[69301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:31 compute-0 python3.9[69303]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.tvsuidq6' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:14:31 compute-0 sudo[69301]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:32 compute-0 sudo[69455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maibeqdrrnzaqgmkdhcmanjbsdthilys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396871.8427837-162-168507537721354/AnsiballZ_file.py'
Nov 29 06:14:32 compute-0 sudo[69455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:32 compute-0 python3.9[69457]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.tvsuidq6 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:14:32 compute-0 sudo[69455]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:32 compute-0 sshd-session[68543]: Connection closed by 192.168.122.30 port 47344
Nov 29 06:14:33 compute-0 sshd-session[68540]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:14:33 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 29 06:14:33 compute-0 systemd[1]: session-15.scope: Consumed 3.811s CPU time.
Nov 29 06:14:33 compute-0 systemd-logind[797]: Session 15 logged out. Waiting for processes to exit.
Nov 29 06:14:33 compute-0 systemd-logind[797]: Removed session 15.
Nov 29 06:14:38 compute-0 sshd-session[69482]: Invalid user deploy from 79.116.35.29 port 53696
Nov 29 06:14:38 compute-0 sshd-session[69482]: Received disconnect from 79.116.35.29 port 53696:11: Bye Bye [preauth]
Nov 29 06:14:38 compute-0 sshd-session[69482]: Disconnected from invalid user deploy 79.116.35.29 port 53696 [preauth]
Nov 29 06:14:39 compute-0 sshd-session[69484]: Accepted publickey for zuul from 192.168.122.30 port 55680 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:14:39 compute-0 systemd-logind[797]: New session 16 of user zuul.
Nov 29 06:14:39 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 29 06:14:39 compute-0 sshd-session[69484]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:14:40 compute-0 python3.9[69637]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:14:41 compute-0 sudo[69791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwiwilykwpqqfapaydvnbwcpuozcbjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396880.5964968-61-146992827709947/AnsiballZ_systemd.py'
Nov 29 06:14:41 compute-0 sudo[69791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:41 compute-0 python3.9[69793]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 06:14:41 compute-0 sudo[69791]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:42 compute-0 sudo[69945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byjfvqkaxoqprdopdaiiiawvmdgskskr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396881.8321676-85-226916720210523/AnsiballZ_systemd.py'
Nov 29 06:14:42 compute-0 sudo[69945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:42 compute-0 python3.9[69947]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:14:42 compute-0 sudo[69945]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:44 compute-0 sudo[70098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoeyhfrbwlapdiyqpghyevjrypwffvpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396883.8581312-112-226320162041898/AnsiballZ_command.py'
Nov 29 06:14:44 compute-0 sudo[70098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:44 compute-0 python3.9[70100]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:14:44 compute-0 sudo[70098]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:45 compute-0 sudo[70251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlcveouwviqdkfyoxnnemngdsudtidwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396884.8037426-136-243473057541317/AnsiballZ_stat.py'
Nov 29 06:14:45 compute-0 sudo[70251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:45 compute-0 python3.9[70253]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:14:45 compute-0 sudo[70251]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:46 compute-0 sudo[70405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkmxktvxmghoanheqsehnrecncwmaarp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396885.758041-160-45005293803222/AnsiballZ_command.py'
Nov 29 06:14:46 compute-0 sudo[70405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:46 compute-0 python3.9[70407]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:14:46 compute-0 sudo[70405]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:47 compute-0 sudo[70560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlhyeznbovbtmgfyaicsehhikeowtgop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396886.5861688-184-187240356402172/AnsiballZ_file.py'
Nov 29 06:14:47 compute-0 sudo[70560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:47 compute-0 python3.9[70562]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:14:47 compute-0 sudo[70560]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:47 compute-0 sshd-session[69487]: Connection closed by 192.168.122.30 port 55680
Nov 29 06:14:47 compute-0 sshd-session[69484]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:14:47 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 29 06:14:47 compute-0 systemd[1]: session-16.scope: Consumed 4.898s CPU time.
Nov 29 06:14:47 compute-0 systemd-logind[797]: Session 16 logged out. Waiting for processes to exit.
Nov 29 06:14:47 compute-0 systemd-logind[797]: Removed session 16.
Nov 29 06:14:51 compute-0 sshd-session[70587]: Invalid user sammy from 31.6.212.12 port 35608
Nov 29 06:14:51 compute-0 sshd-session[70587]: Received disconnect from 31.6.212.12 port 35608:11: Bye Bye [preauth]
Nov 29 06:14:51 compute-0 sshd-session[70587]: Disconnected from invalid user sammy 31.6.212.12 port 35608 [preauth]
Nov 29 06:14:53 compute-0 sshd-session[70589]: Accepted publickey for zuul from 192.168.122.30 port 48870 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:14:53 compute-0 systemd-logind[797]: New session 17 of user zuul.
Nov 29 06:14:53 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 29 06:14:53 compute-0 sshd-session[70589]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:14:54 compute-0 python3.9[70742]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:14:55 compute-0 sudo[70896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvryiiyednjlodxglilbmikgccefowuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396894.8554342-67-117807790420879/AnsiballZ_setup.py'
Nov 29 06:14:55 compute-0 sudo[70896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:55 compute-0 python3.9[70898]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:14:55 compute-0 sudo[70896]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:56 compute-0 sudo[70980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfaqqgixybjitrxgmqvaihdnggwolbyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764396894.8554342-67-117807790420879/AnsiballZ_dnf.py'
Nov 29 06:14:56 compute-0 sudo[70980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:14:56 compute-0 python3.9[70982]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 06:14:57 compute-0 sudo[70980]: pam_unix(sudo:session): session closed for user root
Nov 29 06:14:58 compute-0 python3.9[71133]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:15:00 compute-0 python3.9[71284]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 06:15:00 compute-0 python3.9[71434]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:15:00 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 06:15:01 compute-0 python3.9[71585]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:15:02 compute-0 sshd-session[70592]: Connection closed by 192.168.122.30 port 48870
Nov 29 06:15:02 compute-0 sshd-session[70589]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:15:02 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 29 06:15:02 compute-0 systemd[1]: session-17.scope: Consumed 6.465s CPU time.
Nov 29 06:15:02 compute-0 systemd-logind[797]: Session 17 logged out. Waiting for processes to exit.
Nov 29 06:15:02 compute-0 systemd-logind[797]: Removed session 17.
Nov 29 06:15:02 compute-0 sshd-session[71610]: Invalid user hamed from 138.124.186.225 port 45592
Nov 29 06:15:02 compute-0 sshd-session[71610]: Received disconnect from 138.124.186.225 port 45592:11: Bye Bye [preauth]
Nov 29 06:15:02 compute-0 sshd-session[71610]: Disconnected from invalid user hamed 138.124.186.225 port 45592 [preauth]
Nov 29 06:15:05 compute-0 sshd-session[71612]: Received disconnect from 103.147.159.91 port 52104:11: Bye Bye [preauth]
Nov 29 06:15:05 compute-0 sshd-session[71612]: Disconnected from authenticating user root 103.147.159.91 port 52104 [preauth]
Nov 29 06:15:09 compute-0 chronyd[58809]: Selected source 162.159.200.123 (pool.ntp.org)
Nov 29 06:15:11 compute-0 sshd-session[71614]: Accepted publickey for zuul from 38.102.83.107 port 45836 ssh2: RSA SHA256:MGJJb6X2bjkH8oWT85dgz2a/TwKBbh3/GDOWF3tnPlY
Nov 29 06:15:11 compute-0 systemd-logind[797]: New session 18 of user zuul.
Nov 29 06:15:11 compute-0 systemd[1]: Started Session 18 of User zuul.
Nov 29 06:15:11 compute-0 sshd-session[71614]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:15:11 compute-0 sudo[71690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lihcvdrgmfyouerajqsaasgsxqnlwkuk ; /usr/bin/python3'
Nov 29 06:15:11 compute-0 sudo[71690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:12 compute-0 useradd[71694]: new group: name=ceph-admin, GID=42478
Nov 29 06:15:12 compute-0 useradd[71694]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 29 06:15:12 compute-0 sudo[71690]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:12 compute-0 sudo[71776]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzzyuruiivfzukpqnqmqlsgkzaypbvpj ; /usr/bin/python3'
Nov 29 06:15:12 compute-0 sudo[71776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:12 compute-0 sudo[71776]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:13 compute-0 sudo[71849]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcrqmmebgtxmhkftrsuhzyhrogmtppow ; /usr/bin/python3'
Nov 29 06:15:13 compute-0 sudo[71849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:13 compute-0 sudo[71849]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:13 compute-0 sudo[71899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzxwdqninnkjugicsqjydaybegxzhmgk ; /usr/bin/python3'
Nov 29 06:15:13 compute-0 sudo[71899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:13 compute-0 sudo[71899]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:14 compute-0 sudo[71925]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqdrastldcadoaotejbqpxgwkwydolbt ; /usr/bin/python3'
Nov 29 06:15:14 compute-0 sudo[71925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:14 compute-0 sudo[71925]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:14 compute-0 sudo[71951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kopeaobnmjvrqtzyyclyhonvzqphlzgh ; /usr/bin/python3'
Nov 29 06:15:14 compute-0 sudo[71951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:14 compute-0 sudo[71951]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:15 compute-0 sudo[71977]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkhhulododltisrucxssonupgzgpybma ; /usr/bin/python3'
Nov 29 06:15:15 compute-0 sudo[71977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:15 compute-0 sudo[71977]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:15 compute-0 sudo[72055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kznoembeljzahplmwrmckoamhkxitjdt ; /usr/bin/python3'
Nov 29 06:15:15 compute-0 sudo[72055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:15 compute-0 sudo[72055]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:16 compute-0 sudo[72128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzdfnigxotwhiazwfnfiultbrbyztcfk ; /usr/bin/python3'
Nov 29 06:15:16 compute-0 sudo[72128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:16 compute-0 sudo[72128]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:16 compute-0 sudo[72230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-somiglqlnldqhbjgzfyyrltvpxstvvhl ; /usr/bin/python3'
Nov 29 06:15:16 compute-0 sudo[72230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:16 compute-0 sudo[72230]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:17 compute-0 sudo[72303]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjnowgmfhmxsdxqlatrihdifanpaovgy ; /usr/bin/python3'
Nov 29 06:15:17 compute-0 sudo[72303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:17 compute-0 sudo[72303]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:17 compute-0 sudo[72353]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atsnwgpkeenzysojfgitwlyyzkenfenx ; /usr/bin/python3'
Nov 29 06:15:17 compute-0 sudo[72353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:18 compute-0 python3[72355]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:15:19 compute-0 sudo[72353]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:19 compute-0 sudo[72448]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqkgwiznapopyceldzqhxahilnvzxyhc ; /usr/bin/python3'
Nov 29 06:15:19 compute-0 sudo[72448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:20 compute-0 python3[72450]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 06:15:21 compute-0 sudo[72448]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:21 compute-0 sudo[72475]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cimkbnxvsmypzqzmyqeehkkipaaluzjz ; /usr/bin/python3'
Nov 29 06:15:21 compute-0 sudo[72475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:21 compute-0 python3[72477]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 06:15:21 compute-0 sudo[72475]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:22 compute-0 sudo[72501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkhpyzzfivhtllqgysrypigrnfiskgto ; /usr/bin/python3'
Nov 29 06:15:22 compute-0 sudo[72501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:22 compute-0 python3[72503]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:15:22 compute-0 kernel: loop: module loaded
Nov 29 06:15:22 compute-0 kernel: loop3: detected capacity change from 0 to 14680064
Nov 29 06:15:22 compute-0 sudo[72501]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:22 compute-0 sudo[72536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqfgjyjyyshygagwvbkpqwhwxdksdadj ; /usr/bin/python3'
Nov 29 06:15:22 compute-0 sudo[72536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:22 compute-0 python3[72538]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:15:22 compute-0 lvm[72541]: PV /dev/loop3 not used.
Nov 29 06:15:22 compute-0 lvm[72543]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 06:15:23 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 29 06:15:23 compute-0 lvm[72545]:   0 logical volume(s) in volume group "ceph_vg0" now active
Nov 29 06:15:23 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 29 06:15:23 compute-0 lvm[72553]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 06:15:23 compute-0 lvm[72553]: VG ceph_vg0 finished
Nov 29 06:15:23 compute-0 sudo[72536]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:24 compute-0 sudo[72630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzwxupxbpwdolumaiiszmjqlgximiqbz ; /usr/bin/python3'
Nov 29 06:15:24 compute-0 sudo[72630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:24 compute-0 python3[72632]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:15:24 compute-0 sudo[72630]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:24 compute-0 sudo[72703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dutptkewrjciwkpqvfrjykjeavpjfubq ; /usr/bin/python3'
Nov 29 06:15:24 compute-0 sudo[72703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:24 compute-0 python3[72705]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764396923.9357212-37028-164864907491019/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:15:24 compute-0 sudo[72703]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:25 compute-0 sudo[72753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvadtzlxvotvlyuumxiylqfmsaoodsut ; /usr/bin/python3'
Nov 29 06:15:25 compute-0 sudo[72753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:25 compute-0 python3[72755]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:15:25 compute-0 systemd[1]: Reloading.
Nov 29 06:15:25 compute-0 systemd-rc-local-generator[72785]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:15:25 compute-0 systemd-sysv-generator[72789]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:15:26 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 29 06:15:26 compute-0 bash[72795]: /dev/loop3: [64513]:4194937 (/var/lib/ceph-osd-0.img)
Nov 29 06:15:26 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 29 06:15:26 compute-0 sudo[72753]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:26 compute-0 lvm[72797]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 06:15:26 compute-0 lvm[72797]: VG ceph_vg0 finished
Nov 29 06:15:28 compute-0 python3[72821]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:15:30 compute-0 sshd-session[72877]: Received disconnect from 104.208.108.166 port 59208:11: Bye Bye [preauth]
Nov 29 06:15:30 compute-0 sshd-session[72877]: Disconnected from authenticating user root 104.208.108.166 port 59208 [preauth]
Nov 29 06:15:30 compute-0 sudo[72914]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qipbowbevxwafwbjdreiroygdonaqsel ; /usr/bin/python3'
Nov 29 06:15:30 compute-0 sudo[72914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:30 compute-0 python3[72916]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 06:15:32 compute-0 groupadd[72922]: group added to /etc/group: name=cephadm, GID=992
Nov 29 06:15:32 compute-0 groupadd[72922]: group added to /etc/gshadow: name=cephadm
Nov 29 06:15:32 compute-0 groupadd[72922]: new group: name=cephadm, GID=992
Nov 29 06:15:32 compute-0 useradd[72929]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Nov 29 06:15:33 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 06:15:33 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 06:15:33 compute-0 sudo[72914]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:33 compute-0 sudo[73024]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxufdobqnwflcoznecrnzbntnnrziwaz ; /usr/bin/python3'
Nov 29 06:15:33 compute-0 sudo[73024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:34 compute-0 python3[73026]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 06:15:34 compute-0 sudo[73024]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:34 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 06:15:34 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 06:15:34 compute-0 systemd[1]: run-r0c835895f1bb477fa6c9af610f15c51f.service: Deactivated successfully.
Nov 29 06:15:34 compute-0 sudo[73053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtbekxfypzdhgtowtlqluafgfctctnle ; /usr/bin/python3'
Nov 29 06:15:34 compute-0 sudo[73053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:34 compute-0 python3[73055]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:15:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 06:15:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 06:15:34 compute-0 sudo[73053]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:35 compute-0 sudo[73116]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwsqaiorqlbdfkqjewobgpgeaukzippe ; /usr/bin/python3'
Nov 29 06:15:35 compute-0 sudo[73116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:35 compute-0 python3[73118]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:15:35 compute-0 sudo[73116]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:35 compute-0 sudo[73142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckzoijdhemvonqmzxerxnjkdzftagtdr ; /usr/bin/python3'
Nov 29 06:15:35 compute-0 sudo[73142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 06:15:35 compute-0 python3[73144]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:15:35 compute-0 sudo[73142]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:36 compute-0 sudo[73220]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywucugzfqphxcutneejtsgbmqwevnpsn ; /usr/bin/python3'
Nov 29 06:15:36 compute-0 sudo[73220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:36 compute-0 python3[73222]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:15:36 compute-0 sudo[73220]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:36 compute-0 sudo[73293]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjtnseeynndxnyirklzvyjfuudwiggqa ; /usr/bin/python3'
Nov 29 06:15:36 compute-0 sudo[73293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:36 compute-0 python3[73295]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764396936.192666-37220-84635208453874/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:15:36 compute-0 sudo[73293]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:37 compute-0 sudo[73395]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcukzucsdeabmpbzuespjvizfoeknvhm ; /usr/bin/python3'
Nov 29 06:15:37 compute-0 sudo[73395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:37 compute-0 python3[73397]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:15:37 compute-0 sudo[73395]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:38 compute-0 sudo[73468]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlvqbcuanjyrxotnaoyorkdjohmaqkjs ; /usr/bin/python3'
Nov 29 06:15:38 compute-0 sudo[73468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:38 compute-0 python3[73470]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764396937.4645936-37238-279326176779959/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:15:38 compute-0 sudo[73468]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:38 compute-0 sudo[73518]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhxqcdiligsojfvsvftettffyqqobedx ; /usr/bin/python3'
Nov 29 06:15:38 compute-0 sudo[73518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:38 compute-0 python3[73520]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 06:15:38 compute-0 sudo[73518]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:38 compute-0 sudo[73546]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeiwwluunenppfuozyxdgkvqqpdifkdp ; /usr/bin/python3'
Nov 29 06:15:38 compute-0 sudo[73546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:39 compute-0 python3[73548]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 06:15:39 compute-0 sudo[73546]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:39 compute-0 sudo[73574]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfrstiddabuoacvupwsjrrqrotitvnyk ; /usr/bin/python3'
Nov 29 06:15:39 compute-0 sudo[73574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:39 compute-0 python3[73576]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 06:15:39 compute-0 sudo[73574]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:39 compute-0 sudo[73602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwgtfohaykvgadhpnaaadpwtbqzfjeub ; /usr/bin/python3'
Nov 29 06:15:39 compute-0 sudo[73602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:15:40 compute-0 python3[73604]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 06:15:40 compute-0 sshd-session[73621]: Accepted publickey for ceph-admin from 192.168.122.100 port 43042 ssh2: RSA SHA256:wSO38gUigzg+3qmbq5ZCXhMSnm1ow+14BbAXfOugcIA
Nov 29 06:15:40 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 06:15:40 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 06:15:40 compute-0 systemd-logind[797]: New session 19 of user ceph-admin.
Nov 29 06:15:40 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 06:15:40 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 29 06:15:40 compute-0 systemd[73625]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:15:40 compute-0 systemd[73625]: Queued start job for default target Main User Target.
Nov 29 06:15:40 compute-0 systemd[73625]: Created slice User Application Slice.
Nov 29 06:15:40 compute-0 systemd[73625]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 06:15:40 compute-0 systemd[73625]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 06:15:40 compute-0 systemd[73625]: Reached target Paths.
Nov 29 06:15:40 compute-0 systemd[73625]: Reached target Timers.
Nov 29 06:15:40 compute-0 systemd[73625]: Starting D-Bus User Message Bus Socket...
Nov 29 06:15:40 compute-0 systemd[73625]: Starting Create User's Volatile Files and Directories...
Nov 29 06:15:40 compute-0 systemd[73625]: Listening on D-Bus User Message Bus Socket.
Nov 29 06:15:40 compute-0 systemd[73625]: Reached target Sockets.
Nov 29 06:15:40 compute-0 systemd[73625]: Finished Create User's Volatile Files and Directories.
Nov 29 06:15:40 compute-0 systemd[73625]: Reached target Basic System.
Nov 29 06:15:40 compute-0 systemd[73625]: Reached target Main User Target.
Nov 29 06:15:40 compute-0 systemd[73625]: Startup finished in 160ms.
Nov 29 06:15:40 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 29 06:15:40 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Nov 29 06:15:40 compute-0 sshd-session[73621]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:15:40 compute-0 sudo[73642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Nov 29 06:15:40 compute-0 sudo[73642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:15:40 compute-0 sudo[73642]: pam_unix(sudo:session): session closed for user root
Nov 29 06:15:40 compute-0 sshd-session[73641]: Received disconnect from 192.168.122.100 port 43042:11: disconnected by user
Nov 29 06:15:40 compute-0 sshd-session[73641]: Disconnected from user ceph-admin 192.168.122.100 port 43042
Nov 29 06:15:40 compute-0 sshd-session[73621]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 29 06:15:40 compute-0 systemd-logind[797]: Session 19 logged out. Waiting for processes to exit.
Nov 29 06:15:40 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Nov 29 06:15:40 compute-0 systemd-logind[797]: Removed session 19.
Nov 29 06:15:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1725561353-lower\x2dmapped.mount: Deactivated successfully.
Nov 29 06:15:48 compute-0 sshd-session[73737]: Invalid user marvin from 79.116.35.29 port 53012
Nov 29 06:15:48 compute-0 sshd-session[73737]: Received disconnect from 79.116.35.29 port 53012:11: Bye Bye [preauth]
Nov 29 06:15:48 compute-0 sshd-session[73737]: Disconnected from invalid user marvin 79.116.35.29 port 53012 [preauth]
Nov 29 06:15:51 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 29 06:15:51 compute-0 systemd[73625]: Activating special unit Exit the Session...
Nov 29 06:15:51 compute-0 systemd[73625]: Stopped target Main User Target.
Nov 29 06:15:51 compute-0 systemd[73625]: Stopped target Basic System.
Nov 29 06:15:51 compute-0 systemd[73625]: Stopped target Paths.
Nov 29 06:15:51 compute-0 systemd[73625]: Stopped target Sockets.
Nov 29 06:15:51 compute-0 systemd[73625]: Stopped target Timers.
Nov 29 06:15:51 compute-0 systemd[73625]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 06:15:51 compute-0 systemd[73625]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 06:15:51 compute-0 systemd[73625]: Closed D-Bus User Message Bus Socket.
Nov 29 06:15:51 compute-0 systemd[73625]: Stopped Create User's Volatile Files and Directories.
Nov 29 06:15:51 compute-0 systemd[73625]: Removed slice User Application Slice.
Nov 29 06:15:51 compute-0 systemd[73625]: Reached target Shutdown.
Nov 29 06:15:51 compute-0 systemd[73625]: Finished Exit the Session.
Nov 29 06:15:51 compute-0 systemd[73625]: Reached target Exit the Session.
Nov 29 06:15:51 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 29 06:15:51 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 29 06:15:51 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 29 06:15:51 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 29 06:15:51 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 29 06:15:51 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 29 06:15:51 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 29 06:15:59 compute-0 podman[73679]: 2025-11-29 06:15:59.995662999 +0000 UTC m=+19.123457298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:15:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 06:16:00 compute-0 podman[73770]: 2025-11-29 06:16:00.073128117 +0000 UTC m=+0.046416358 container create acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425 (image=quay.io/ceph/ceph:v18, name=gallant_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 06:16:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1622958540-merged.mount: Deactivated successfully.
Nov 29 06:16:00 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 29 06:16:00 compute-0 systemd[1]: Started libpod-conmon-acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425.scope.
Nov 29 06:16:00 compute-0 podman[73770]: 2025-11-29 06:16:00.05139746 +0000 UTC m=+0.024685721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:00 compute-0 podman[73770]: 2025-11-29 06:16:00.180961216 +0000 UTC m=+0.154249457 container init acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425 (image=quay.io/ceph/ceph:v18, name=gallant_bohr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 06:16:00 compute-0 podman[73770]: 2025-11-29 06:16:00.189338534 +0000 UTC m=+0.162626785 container start acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425 (image=quay.io/ceph/ceph:v18, name=gallant_bohr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:16:00 compute-0 podman[73770]: 2025-11-29 06:16:00.192935146 +0000 UTC m=+0.166223407 container attach acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425 (image=quay.io/ceph/ceph:v18, name=gallant_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:00 compute-0 gallant_bohr[73786]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 29 06:16:00 compute-0 systemd[1]: libpod-acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425.scope: Deactivated successfully.
Nov 29 06:16:00 compute-0 podman[73770]: 2025-11-29 06:16:00.491333052 +0000 UTC m=+0.464621343 container died acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425 (image=quay.io/ceph/ceph:v18, name=gallant_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 06:16:00 compute-0 podman[73770]: 2025-11-29 06:16:00.547167336 +0000 UTC m=+0.520455607 container remove acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425 (image=quay.io/ceph/ceph:v18, name=gallant_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 06:16:00 compute-0 systemd[1]: libpod-conmon-acff35728a6edbac9c5bcb27012457000df2f7ad48ffe8d0bac113c10fdf0425.scope: Deactivated successfully.
Nov 29 06:16:00 compute-0 podman[73802]: 2025-11-29 06:16:00.623953384 +0000 UTC m=+0.053787977 container create cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11 (image=quay.io/ceph/ceph:v18, name=funny_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 06:16:00 compute-0 systemd[1]: Started libpod-conmon-cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11.scope.
Nov 29 06:16:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:00 compute-0 podman[73802]: 2025-11-29 06:16:00.680836058 +0000 UTC m=+0.110670701 container init cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11 (image=quay.io/ceph/ceph:v18, name=funny_jemison, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 06:16:00 compute-0 podman[73802]: 2025-11-29 06:16:00.685219282 +0000 UTC m=+0.115053885 container start cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11 (image=quay.io/ceph/ceph:v18, name=funny_jemison, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 06:16:00 compute-0 podman[73802]: 2025-11-29 06:16:00.688347551 +0000 UTC m=+0.118182164 container attach cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11 (image=quay.io/ceph/ceph:v18, name=funny_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:16:00 compute-0 podman[73802]: 2025-11-29 06:16:00.595827806 +0000 UTC m=+0.025662519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:00 compute-0 funny_jemison[73819]: 167 167
Nov 29 06:16:00 compute-0 systemd[1]: libpod-cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11.scope: Deactivated successfully.
Nov 29 06:16:00 compute-0 podman[73802]: 2025-11-29 06:16:00.690514723 +0000 UTC m=+0.120349326 container died cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11 (image=quay.io/ceph/ceph:v18, name=funny_jemison, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:00 compute-0 podman[73802]: 2025-11-29 06:16:00.731993069 +0000 UTC m=+0.161827702 container remove cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11 (image=quay.io/ceph/ceph:v18, name=funny_jemison, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 06:16:00 compute-0 systemd[1]: libpod-conmon-cc8b18ae3c2e99ddc33877647f9dcf1d894fdca8e80dc62659ad6f54946e6e11.scope: Deactivated successfully.
Nov 29 06:16:00 compute-0 podman[73837]: 2025-11-29 06:16:00.799084763 +0000 UTC m=+0.040713636 container create e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0 (image=quay.io/ceph/ceph:v18, name=trusting_babbage, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 06:16:00 compute-0 systemd[1]: Started libpod-conmon-e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0.scope.
Nov 29 06:16:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:00 compute-0 podman[73837]: 2025-11-29 06:16:00.865056214 +0000 UTC m=+0.106685077 container init e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0 (image=quay.io/ceph/ceph:v18, name=trusting_babbage, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 06:16:00 compute-0 podman[73837]: 2025-11-29 06:16:00.871027884 +0000 UTC m=+0.112656767 container start e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0 (image=quay.io/ceph/ceph:v18, name=trusting_babbage, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:00 compute-0 podman[73837]: 2025-11-29 06:16:00.779136357 +0000 UTC m=+0.020765240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:00 compute-0 podman[73837]: 2025-11-29 06:16:00.875725257 +0000 UTC m=+0.117354130 container attach e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0 (image=quay.io/ceph/ceph:v18, name=trusting_babbage, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 29 06:16:00 compute-0 trusting_babbage[73855]: AQCgjypp69I3NhAAR2bMWBw4r8XowKCsVsHPQw==
Nov 29 06:16:00 compute-0 systemd[1]: libpod-e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0.scope: Deactivated successfully.
Nov 29 06:16:00 compute-0 podman[73837]: 2025-11-29 06:16:00.914054975 +0000 UTC m=+0.155683818 container died e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0 (image=quay.io/ceph/ceph:v18, name=trusting_babbage, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 06:16:00 compute-0 podman[73837]: 2025-11-29 06:16:00.952546977 +0000 UTC m=+0.194175820 container remove e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0 (image=quay.io/ceph/ceph:v18, name=trusting_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 06:16:00 compute-0 systemd[1]: libpod-conmon-e323f52e3e9555e8671714f9649357986256e9d2c64c9fa6dbab07b1887223d0.scope: Deactivated successfully.
Nov 29 06:16:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bb652b2846f5f6d97c8292a070fdb4a9590a81fb766a576419b5b0ebf30613e-merged.mount: Deactivated successfully.
Nov 29 06:16:01 compute-0 podman[73874]: 2025-11-29 06:16:01.057170595 +0000 UTC m=+0.073587779 container create 35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0 (image=quay.io/ceph/ceph:v18, name=epic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:16:01 compute-0 systemd[1]: Started libpod-conmon-35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0.scope.
Nov 29 06:16:01 compute-0 podman[73874]: 2025-11-29 06:16:01.029324955 +0000 UTC m=+0.045742179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:01 compute-0 podman[73874]: 2025-11-29 06:16:01.15745546 +0000 UTC m=+0.173872614 container init 35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0 (image=quay.io/ceph/ceph:v18, name=epic_dubinsky, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:16:01 compute-0 podman[73874]: 2025-11-29 06:16:01.163637685 +0000 UTC m=+0.180054869 container start 35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0 (image=quay.io/ceph/ceph:v18, name=epic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 06:16:01 compute-0 podman[73874]: 2025-11-29 06:16:01.167834844 +0000 UTC m=+0.184252008 container attach 35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0 (image=quay.io/ceph/ceph:v18, name=epic_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 06:16:01 compute-0 epic_dubinsky[73890]: AQChjyppOUhwCxAADdGewaDdp9HBbsTf1aZPoQ==
Nov 29 06:16:01 compute-0 systemd[1]: libpod-35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0.scope: Deactivated successfully.
Nov 29 06:16:01 compute-0 podman[73874]: 2025-11-29 06:16:01.198458473 +0000 UTC m=+0.214875657 container died 35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0 (image=quay.io/ceph/ceph:v18, name=epic_dubinsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 06:16:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c22e25198297cf37cc6f3df5aad6148246e65c32ea74214eef98a0a4761b1ba5-merged.mount: Deactivated successfully.
Nov 29 06:16:01 compute-0 podman[73874]: 2025-11-29 06:16:01.24556302 +0000 UTC m=+0.261980204 container remove 35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0 (image=quay.io/ceph/ceph:v18, name=epic_dubinsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 06:16:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 06:16:01 compute-0 systemd[1]: libpod-conmon-35f35919f9f565e8a2aac5c2a13bb6bb9f93aad0087ec296eba1851ee33db6b0.scope: Deactivated successfully.
Nov 29 06:16:01 compute-0 podman[73908]: 2025-11-29 06:16:01.349979082 +0000 UTC m=+0.075292607 container create 9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210 (image=quay.io/ceph/ceph:v18, name=jovial_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:01 compute-0 systemd[1]: Started libpod-conmon-9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210.scope.
Nov 29 06:16:01 compute-0 podman[73908]: 2025-11-29 06:16:01.311816869 +0000 UTC m=+0.037130444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:01 compute-0 podman[73908]: 2025-11-29 06:16:01.444048631 +0000 UTC m=+0.169362166 container init 9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210 (image=quay.io/ceph/ceph:v18, name=jovial_thompson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 06:16:01 compute-0 podman[73908]: 2025-11-29 06:16:01.453254762 +0000 UTC m=+0.178568257 container start 9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210 (image=quay.io/ceph/ceph:v18, name=jovial_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:01 compute-0 podman[73908]: 2025-11-29 06:16:01.45740585 +0000 UTC m=+0.182719365 container attach 9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210 (image=quay.io/ceph/ceph:v18, name=jovial_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:01 compute-0 jovial_thompson[73924]: AQChjypp97uGHBAAchmJk9cEjMyNqQhaf4l4Xw==
Nov 29 06:16:01 compute-0 systemd[1]: libpod-9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210.scope: Deactivated successfully.
Nov 29 06:16:01 compute-0 podman[73908]: 2025-11-29 06:16:01.482583254 +0000 UTC m=+0.207896759 container died 9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210 (image=quay.io/ceph/ceph:v18, name=jovial_thompson, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 06:16:01 compute-0 podman[73908]: 2025-11-29 06:16:01.517978388 +0000 UTC m=+0.243291873 container remove 9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210 (image=quay.io/ceph/ceph:v18, name=jovial_thompson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:01 compute-0 systemd[1]: libpod-conmon-9d226f4d8fc530cfb1623e5a5fc5a9d75b61ffda7727bebfbe085fc54e866210.scope: Deactivated successfully.
Nov 29 06:16:01 compute-0 podman[73941]: 2025-11-29 06:16:01.612081768 +0000 UTC m=+0.063292517 container create 3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c (image=quay.io/ceph/ceph:v18, name=stoic_cannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 06:16:01 compute-0 systemd[1]: Started libpod-conmon-3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c.scope.
Nov 29 06:16:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c796d98103d2ba3058ed8d158cdae282291c4cf023038ab09440abdcfe11d28a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:01 compute-0 podman[73941]: 2025-11-29 06:16:01.68123223 +0000 UTC m=+0.132442989 container init 3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c (image=quay.io/ceph/ceph:v18, name=stoic_cannon, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 06:16:01 compute-0 podman[73941]: 2025-11-29 06:16:01.591482393 +0000 UTC m=+0.042693132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:01 compute-0 podman[73941]: 2025-11-29 06:16:01.687632431 +0000 UTC m=+0.138843150 container start 3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c (image=quay.io/ceph/ceph:v18, name=stoic_cannon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 06:16:01 compute-0 podman[73941]: 2025-11-29 06:16:01.691544682 +0000 UTC m=+0.142755441 container attach 3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c (image=quay.io/ceph/ceph:v18, name=stoic_cannon, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:01 compute-0 stoic_cannon[73957]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 29 06:16:01 compute-0 stoic_cannon[73957]: setting min_mon_release = pacific
Nov 29 06:16:01 compute-0 stoic_cannon[73957]: /usr/bin/monmaptool: set fsid to 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:16:01 compute-0 stoic_cannon[73957]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 29 06:16:01 compute-0 systemd[1]: libpod-3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c.scope: Deactivated successfully.
Nov 29 06:16:01 compute-0 podman[73941]: 2025-11-29 06:16:01.729661194 +0000 UTC m=+0.180871943 container died 3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c (image=quay.io/ceph/ceph:v18, name=stoic_cannon, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 06:16:01 compute-0 podman[73941]: 2025-11-29 06:16:01.770820972 +0000 UTC m=+0.222031701 container remove 3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c (image=quay.io/ceph/ceph:v18, name=stoic_cannon, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:01 compute-0 systemd[1]: libpod-conmon-3982cc6f4bab531c36a4862c49c9053dba07a2cd12da64fbc6b5936a916d631c.scope: Deactivated successfully.
Nov 29 06:16:01 compute-0 podman[73976]: 2025-11-29 06:16:01.850161892 +0000 UTC m=+0.051848161 container create c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3 (image=quay.io/ceph/ceph:v18, name=jovial_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 06:16:01 compute-0 systemd[1]: Started libpod-conmon-c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3.scope.
Nov 29 06:16:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8248aea0cf649fd1c016102ec2d70f20399a7f5cb44c6aae16891f2fecb6e87/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8248aea0cf649fd1c016102ec2d70f20399a7f5cb44c6aae16891f2fecb6e87/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:01 compute-0 podman[73976]: 2025-11-29 06:16:01.828017664 +0000 UTC m=+0.029703973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8248aea0cf649fd1c016102ec2d70f20399a7f5cb44c6aae16891f2fecb6e87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8248aea0cf649fd1c016102ec2d70f20399a7f5cb44c6aae16891f2fecb6e87/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:01 compute-0 podman[73976]: 2025-11-29 06:16:01.944819038 +0000 UTC m=+0.146505327 container init c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3 (image=quay.io/ceph/ceph:v18, name=jovial_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:16:01 compute-0 podman[73976]: 2025-11-29 06:16:01.950919011 +0000 UTC m=+0.152605290 container start c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3 (image=quay.io/ceph/ceph:v18, name=jovial_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:16:01 compute-0 podman[73976]: 2025-11-29 06:16:01.954858673 +0000 UTC m=+0.156544942 container attach c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3 (image=quay.io/ceph/ceph:v18, name=jovial_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:02 compute-0 systemd[1]: libpod-c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3.scope: Deactivated successfully.
Nov 29 06:16:02 compute-0 podman[73976]: 2025-11-29 06:16:02.052063401 +0000 UTC m=+0.253749750 container died c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3 (image=quay.io/ceph/ceph:v18, name=jovial_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8248aea0cf649fd1c016102ec2d70f20399a7f5cb44c6aae16891f2fecb6e87-merged.mount: Deactivated successfully.
Nov 29 06:16:02 compute-0 podman[73976]: 2025-11-29 06:16:02.10490816 +0000 UTC m=+0.306594459 container remove c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3 (image=quay.io/ceph/ceph:v18, name=jovial_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 06:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 06:16:02 compute-0 systemd[1]: libpod-conmon-c83c2ed855b173aadf1d319a62f75d21842951e657aa3ada22f5bbb3a6239fb3.scope: Deactivated successfully.
Nov 29 06:16:02 compute-0 systemd[1]: Reloading.
Nov 29 06:16:02 compute-0 systemd-rc-local-generator[74060]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:16:02 compute-0 systemd-sysv-generator[74063]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:16:02 compute-0 systemd[1]: Reloading.
Nov 29 06:16:02 compute-0 systemd-rc-local-generator[74094]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:16:02 compute-0 systemd-sysv-generator[74100]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:16:02 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 29 06:16:02 compute-0 systemd[1]: Reloading.
Nov 29 06:16:02 compute-0 systemd-rc-local-generator[74135]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:16:02 compute-0 systemd-sysv-generator[74139]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:16:02 compute-0 systemd[1]: Reached target Ceph cluster 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 06:16:02 compute-0 systemd[1]: Reloading.
Nov 29 06:16:02 compute-0 systemd-rc-local-generator[74175]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:16:02 compute-0 systemd-sysv-generator[74179]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:16:03 compute-0 systemd[1]: Reloading.
Nov 29 06:16:03 compute-0 systemd-sysv-generator[74217]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:16:03 compute-0 systemd-rc-local-generator[74211]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:16:03 compute-0 systemd[1]: Created slice Slice /system/ceph-336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 06:16:03 compute-0 systemd[1]: Reached target System Time Set.
Nov 29 06:16:03 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 29 06:16:03 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 06:16:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 06:16:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 06:16:03 compute-0 podman[74272]: 2025-11-29 06:16:03.748281502 +0000 UTC m=+0.058576053 container create 7dad2a0c9576d9ed265ee38fcd17a68df8cb8e5f59cf0de18ae06a6c8fff3d4e (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30c27b6a84d460a1022682dab7ad6135e30f0b4d9feda45deee56876583f7e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30c27b6a84d460a1022682dab7ad6135e30f0b4d9feda45deee56876583f7e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30c27b6a84d460a1022682dab7ad6135e30f0b4d9feda45deee56876583f7e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30c27b6a84d460a1022682dab7ad6135e30f0b4d9feda45deee56876583f7e7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:03 compute-0 podman[74272]: 2025-11-29 06:16:03.816367304 +0000 UTC m=+0.126661865 container init 7dad2a0c9576d9ed265ee38fcd17a68df8cb8e5f59cf0de18ae06a6c8fff3d4e (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 06:16:03 compute-0 podman[74272]: 2025-11-29 06:16:03.728918793 +0000 UTC m=+0.039213354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:03 compute-0 podman[74272]: 2025-11-29 06:16:03.831209085 +0000 UTC m=+0.141503616 container start 7dad2a0c9576d9ed265ee38fcd17a68df8cb8e5f59cf0de18ae06a6c8fff3d4e (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 06:16:03 compute-0 bash[74272]: 7dad2a0c9576d9ed265ee38fcd17a68df8cb8e5f59cf0de18ae06a6c8fff3d4e
Nov 29 06:16:03 compute-0 systemd[1]: Started Ceph mon.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 06:16:03 compute-0 ceph-mon[74293]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 06:16:03 compute-0 ceph-mon[74293]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 06:16:03 compute-0 ceph-mon[74293]: pidfile_write: ignore empty --pid-file
Nov 29 06:16:03 compute-0 ceph-mon[74293]: load: jerasure load: lrc 
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: RocksDB version: 7.9.2
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Git sha 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: DB SUMMARY
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: DB Session ID:  TJX3Q57MMVQ4ZHTA4ZSA
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: CURRENT file:  CURRENT
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                         Options.error_if_exists: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                       Options.create_if_missing: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                                     Options.env: 0x55bdf897dc40
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                                Options.info_log: 0x55bdf97e0ec0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                              Options.statistics: (nil)
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                               Options.use_fsync: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                              Options.db_log_dir: 
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                                 Options.wal_dir: 
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                    Options.write_buffer_manager: 0x55bdf97f0b40
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                  Options.unordered_write: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                               Options.row_cache: None
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                              Options.wal_filter: None
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.two_write_queues: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.wal_compression: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.atomic_flush: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.max_background_jobs: 2
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.max_background_compactions: -1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.max_subcompactions: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                          Options.max_open_files: -1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Compression algorithms supported:
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         kZSTD supported: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         kXpressCompression supported: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         kBZip2Compression supported: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         kLZ4Compression supported: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         kZlibCompression supported: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         kSnappyCompression supported: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:           Options.merge_operator: 
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:        Options.compaction_filter: None
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdf97e0aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdf97d91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:          Options.compression: NoCompression
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.num_levels: 7
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764396963888190, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764396963890457, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "TJX3Q57MMVQ4ZHTA4ZSA", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764396963890609, "job": 1, "event": "recovery_finished"}
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bdf9802e00
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: DB pointer 0x55bdf988c000
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:16:03 compute-0 ceph-mon[74293]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdf97d91f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 06:16:03 compute-0 ceph-mon[74293]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@-1(???) e0 preinit fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 29 06:16:03 compute-0 ceph-mon[74293]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:16:03 compute-0 ceph-mon[74293]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 06:16:03 compute-0 ceph-mon[74293]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:16:03 compute-0 ceph-mon[74293]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 06:16:03 compute-0 ceph-mon[74293]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-29T06:16:01.992828Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,os=Linux}
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 06:16:03 compute-0 podman[74294]: 2025-11-29 06:16:03.951407995 +0000 UTC m=+0.071438678 container create cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222 (image=quay.io/ceph/ceph:v18, name=interesting_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).mds e1 new map
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 06:16:03 compute-0 ceph-mon[74293]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mkfs 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 29 06:16:03 compute-0 ceph-mon[74293]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 06:16:03 compute-0 ceph-mon[74293]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 06:16:03 compute-0 ceph-mon[74293]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 06:16:04 compute-0 systemd[1]: Started libpod-conmon-cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222.scope.
Nov 29 06:16:04 compute-0 podman[74294]: 2025-11-29 06:16:03.923157583 +0000 UTC m=+0.043188316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630348b35a315d3deb39f67e93f0e8926d163c2462dbbd1af67137706198ac9e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630348b35a315d3deb39f67e93f0e8926d163c2462dbbd1af67137706198ac9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630348b35a315d3deb39f67e93f0e8926d163c2462dbbd1af67137706198ac9e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:04 compute-0 podman[74294]: 2025-11-29 06:16:04.086102376 +0000 UTC m=+0.206133089 container init cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222 (image=quay.io/ceph/ceph:v18, name=interesting_poitras, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 06:16:04 compute-0 podman[74294]: 2025-11-29 06:16:04.098483707 +0000 UTC m=+0.218514380 container start cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222 (image=quay.io/ceph/ceph:v18, name=interesting_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:16:04 compute-0 podman[74294]: 2025-11-29 06:16:04.102339737 +0000 UTC m=+0.222370420 container attach cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222 (image=quay.io/ceph/ceph:v18, name=interesting_poitras, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 06:16:04 compute-0 ceph-mon[74293]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 06:16:04 compute-0 ceph-mon[74293]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2396279677' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:   cluster:
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:     id:     336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:     health: HEALTH_OK
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:  
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:   services:
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:     mon: 1 daemons, quorum compute-0 (age 0.56684s)
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:     mgr: no daemons active
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:     osd: 0 osds: 0 up, 0 in
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:  
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:   data:
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:     pools:   0 pools, 0 pgs
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:     objects: 0 objects, 0 B
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:     usage:   0 B used, 0 B / 0 B avail
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:     pgs:     
Nov 29 06:16:04 compute-0 interesting_poitras[74349]:  
Nov 29 06:16:04 compute-0 systemd[1]: libpod-cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222.scope: Deactivated successfully.
Nov 29 06:16:04 compute-0 podman[74294]: 2025-11-29 06:16:04.530937896 +0000 UTC m=+0.650968609 container died cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222 (image=quay.io/ceph/ceph:v18, name=interesting_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 06:16:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-630348b35a315d3deb39f67e93f0e8926d163c2462dbbd1af67137706198ac9e-merged.mount: Deactivated successfully.
Nov 29 06:16:04 compute-0 podman[74294]: 2025-11-29 06:16:04.59449413 +0000 UTC m=+0.714524783 container remove cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222 (image=quay.io/ceph/ceph:v18, name=interesting_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:16:04 compute-0 systemd[1]: libpod-conmon-cf740ba4d7b8cf8d88bd04710851633750ee858f1328f61cf1ccabb9e2b87222.scope: Deactivated successfully.
Nov 29 06:16:04 compute-0 podman[74387]: 2025-11-29 06:16:04.680477609 +0000 UTC m=+0.057840622 container create 116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98 (image=quay.io/ceph/ceph:v18, name=pensive_kilby, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:16:04 compute-0 systemd[1]: Started libpod-conmon-116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98.scope.
Nov 29 06:16:04 compute-0 podman[74387]: 2025-11-29 06:16:04.652960488 +0000 UTC m=+0.030323541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716e8280d49065c454cd5301ac12ee55e8d10d8dca1d571eabe1dd936b709cd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716e8280d49065c454cd5301ac12ee55e8d10d8dca1d571eabe1dd936b709cd1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716e8280d49065c454cd5301ac12ee55e8d10d8dca1d571eabe1dd936b709cd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716e8280d49065c454cd5301ac12ee55e8d10d8dca1d571eabe1dd936b709cd1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:04 compute-0 podman[74387]: 2025-11-29 06:16:04.781580087 +0000 UTC m=+0.158943150 container init 116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98 (image=quay.io/ceph/ceph:v18, name=pensive_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 06:16:04 compute-0 podman[74387]: 2025-11-29 06:16:04.793286359 +0000 UTC m=+0.170649342 container start 116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98 (image=quay.io/ceph/ceph:v18, name=pensive_kilby, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:16:04 compute-0 podman[74387]: 2025-11-29 06:16:04.797456398 +0000 UTC m=+0.174819381 container attach 116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98 (image=quay.io/ceph/ceph:v18, name=pensive_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:04 compute-0 ceph-mon[74293]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 06:16:04 compute-0 ceph-mon[74293]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 06:16:04 compute-0 ceph-mon[74293]: fsmap 
Nov 29 06:16:04 compute-0 ceph-mon[74293]: osdmap e1: 0 total, 0 up, 0 in
Nov 29 06:16:04 compute-0 ceph-mon[74293]: mgrmap e1: no daemons active
Nov 29 06:16:04 compute-0 ceph-mon[74293]: from='client.? 192.168.122.100:0/2396279677' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 06:16:05 compute-0 ceph-mon[74293]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 06:16:05 compute-0 ceph-mon[74293]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2327784468' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 06:16:05 compute-0 ceph-mon[74293]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2327784468' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 06:16:05 compute-0 pensive_kilby[74404]: 
Nov 29 06:16:05 compute-0 pensive_kilby[74404]: [global]
Nov 29 06:16:05 compute-0 pensive_kilby[74404]:         fsid = 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:16:05 compute-0 pensive_kilby[74404]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 29 06:16:05 compute-0 systemd[1]: libpod-116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98.scope: Deactivated successfully.
Nov 29 06:16:05 compute-0 podman[74387]: 2025-11-29 06:16:05.208494049 +0000 UTC m=+0.585857052 container died 116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98 (image=quay.io/ceph/ceph:v18, name=pensive_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-716e8280d49065c454cd5301ac12ee55e8d10d8dca1d571eabe1dd936b709cd1-merged.mount: Deactivated successfully.
Nov 29 06:16:05 compute-0 podman[74387]: 2025-11-29 06:16:05.250955264 +0000 UTC m=+0.628318237 container remove 116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98 (image=quay.io/ceph/ceph:v18, name=pensive_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:16:05 compute-0 systemd[1]: libpod-conmon-116806a2c2fb6a1295a2d9c64402c2f4207eb587b01360eb73bd798ab510af98.scope: Deactivated successfully.
Nov 29 06:16:05 compute-0 podman[74441]: 2025-11-29 06:16:05.347215715 +0000 UTC m=+0.066459327 container create 61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1 (image=quay.io/ceph/ceph:v18, name=adoring_davinci, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 06:16:05 compute-0 systemd[1]: Started libpod-conmon-61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1.scope.
Nov 29 06:16:05 compute-0 podman[74441]: 2025-11-29 06:16:05.319080116 +0000 UTC m=+0.038323788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b8b1e82a7a8ef9d3780bf11fc7218a63a6ca4d1e9a08ba6eaf49f28df01133/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b8b1e82a7a8ef9d3780bf11fc7218a63a6ca4d1e9a08ba6eaf49f28df01133/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b8b1e82a7a8ef9d3780bf11fc7218a63a6ca4d1e9a08ba6eaf49f28df01133/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b8b1e82a7a8ef9d3780bf11fc7218a63a6ca4d1e9a08ba6eaf49f28df01133/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:05 compute-0 podman[74441]: 2025-11-29 06:16:05.437579348 +0000 UTC m=+0.156823010 container init 61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1 (image=quay.io/ceph/ceph:v18, name=adoring_davinci, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 06:16:05 compute-0 podman[74441]: 2025-11-29 06:16:05.447197111 +0000 UTC m=+0.166440683 container start 61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1 (image=quay.io/ceph/ceph:v18, name=adoring_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:05 compute-0 podman[74441]: 2025-11-29 06:16:05.451457782 +0000 UTC m=+0.170701384 container attach 61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1 (image=quay.io/ceph/ceph:v18, name=adoring_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:05 compute-0 ceph-mon[74293]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:16:05 compute-0 ceph-mon[74293]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3213909719' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:16:05 compute-0 systemd[1]: libpod-61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1.scope: Deactivated successfully.
Nov 29 06:16:05 compute-0 podman[74484]: 2025-11-29 06:16:05.904949537 +0000 UTC m=+0.024112945 container died 61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1 (image=quay.io/ceph/ceph:v18, name=adoring_davinci, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 06:16:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-46b8b1e82a7a8ef9d3780bf11fc7218a63a6ca4d1e9a08ba6eaf49f28df01133-merged.mount: Deactivated successfully.
Nov 29 06:16:05 compute-0 podman[74484]: 2025-11-29 06:16:05.957124997 +0000 UTC m=+0.076288405 container remove 61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1 (image=quay.io/ceph/ceph:v18, name=adoring_davinci, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:05 compute-0 systemd[1]: libpod-conmon-61720bcd8debe5b1b02fe1da93557881b23eb587dea28f20ffcda325c7bcc9f1.scope: Deactivated successfully.
Nov 29 06:16:05 compute-0 ceph-mon[74293]: from='client.? 192.168.122.100:0/2327784468' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 06:16:05 compute-0 ceph-mon[74293]: from='client.? 192.168.122.100:0/2327784468' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 06:16:05 compute-0 ceph-mon[74293]: from='client.? 192.168.122.100:0/3213909719' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:16:05 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 06:16:06 compute-0 ceph-mon[74293]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 06:16:06 compute-0 ceph-mon[74293]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 06:16:06 compute-0 ceph-mon[74293]: mon.compute-0@0(leader) e1 shutdown
Nov 29 06:16:06 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74289]: 2025-11-29T06:16:06.235+0000 7f0f5d161640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 06:16:06 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74289]: 2025-11-29T06:16:06.235+0000 7f0f5d161640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 06:16:06 compute-0 ceph-mon[74293]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 06:16:06 compute-0 ceph-mon[74293]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 06:16:06 compute-0 podman[74528]: 2025-11-29 06:16:06.266660839 +0000 UTC m=+0.083574162 container died 7dad2a0c9576d9ed265ee38fcd17a68df8cb8e5f59cf0de18ae06a6c8fff3d4e (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 06:16:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-d30c27b6a84d460a1022682dab7ad6135e30f0b4d9feda45deee56876583f7e7-merged.mount: Deactivated successfully.
Nov 29 06:16:06 compute-0 podman[74528]: 2025-11-29 06:16:06.314026163 +0000 UTC m=+0.130939486 container remove 7dad2a0c9576d9ed265ee38fcd17a68df8cb8e5f59cf0de18ae06a6c8fff3d4e (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:06 compute-0 bash[74528]: ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0
Nov 29 06:16:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 06:16:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 06:16:06 compute-0 systemd[1]: ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@mon.compute-0.service: Deactivated successfully.
Nov 29 06:16:06 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 06:16:06 compute-0 systemd[1]: ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@mon.compute-0.service: Consumed 1.256s CPU time.
Nov 29 06:16:06 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 06:16:06 compute-0 podman[74634]: 2025-11-29 06:16:06.8493511 +0000 UTC m=+0.070406778 container create c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:16:06 compute-0 podman[74634]: 2025-11-29 06:16:06.822005644 +0000 UTC m=+0.043061362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f8b27703670abcc306e2b54256f9521c5a2e0dc66a9e3ac2658fc7598bf8ffc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f8b27703670abcc306e2b54256f9521c5a2e0dc66a9e3ac2658fc7598bf8ffc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f8b27703670abcc306e2b54256f9521c5a2e0dc66a9e3ac2658fc7598bf8ffc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f8b27703670abcc306e2b54256f9521c5a2e0dc66a9e3ac2658fc7598bf8ffc/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:06 compute-0 podman[74634]: 2025-11-29 06:16:06.947839784 +0000 UTC m=+0.168895512 container init c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 06:16:06 compute-0 podman[74634]: 2025-11-29 06:16:06.963494418 +0000 UTC m=+0.184550096 container start c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 06:16:06 compute-0 bash[74634]: c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf
Nov 29 06:16:06 compute-0 systemd[1]: Started Ceph mon.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 06:16:07 compute-0 ceph-mon[74654]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 06:16:07 compute-0 ceph-mon[74654]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 06:16:07 compute-0 ceph-mon[74654]: pidfile_write: ignore empty --pid-file
Nov 29 06:16:07 compute-0 ceph-mon[74654]: load: jerasure load: lrc 
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: RocksDB version: 7.9.2
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Git sha 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: DB SUMMARY
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: DB Session ID:  VL4WOW4AK06DDHF5VQBP
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: CURRENT file:  CURRENT
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55210 ; 
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                         Options.error_if_exists: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                       Options.create_if_missing: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                                     Options.env: 0x55e1a328cc40
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                                Options.info_log: 0x55e1a5839040
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                              Options.statistics: (nil)
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                               Options.use_fsync: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                              Options.db_log_dir: 
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                                 Options.wal_dir: 
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                    Options.write_buffer_manager: 0x55e1a5848b40
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                  Options.unordered_write: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                               Options.row_cache: None
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                              Options.wal_filter: None
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.two_write_queues: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.wal_compression: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.atomic_flush: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.max_background_jobs: 2
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.max_background_compactions: -1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.max_subcompactions: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                          Options.max_open_files: -1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Compression algorithms supported:
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         kZSTD supported: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         kXpressCompression supported: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         kBZip2Compression supported: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         kLZ4Compression supported: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         kZlibCompression supported: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         kSnappyCompression supported: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:           Options.merge_operator: 
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:        Options.compaction_filter: None
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1a5838c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e1a58311f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:          Options.compression: NoCompression
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.num_levels: 7
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764396967031602, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764396967036250, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 136, "table_properties": {"data_size": 53385, "index_size": 170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2933, "raw_average_key_size": 29, "raw_value_size": 51027, "raw_average_value_size": 515, "num_data_blocks": 9, "num_entries": 99, "num_filter_entries": 99, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396967, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764396967036384, "job": 1, "event": "recovery_finished"}
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e1a585ae00
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: DB pointer 0x55e1a58e4000
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:16:07 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   55.46 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   55.46 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.50 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.50 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e1a58311f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 06:16:07 compute-0 ceph-mon[74654]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mon.compute-0@-1(???) e1 preinit fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mon.compute-0@-1(???).mds e1 new map
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 06:16:07 compute-0 ceph-mon[74654]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:16:07 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 06:16:07 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:16:07 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 06:16:07 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 06:16:07 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 06:16:07 compute-0 podman[74655]: 2025-11-29 06:16:07.070762562 +0000 UTC m=+0.060384175 container create ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e (image=quay.io/ceph/ceph:v18, name=interesting_lamarr, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:16:07 compute-0 systemd[1]: Started libpod-conmon-ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e.scope.
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 06:16:07 compute-0 ceph-mon[74654]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 06:16:07 compute-0 ceph-mon[74654]: fsmap 
Nov 29 06:16:07 compute-0 ceph-mon[74654]: osdmap e1: 0 total, 0 up, 0 in
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mgrmap e1: no daemons active
Nov 29 06:16:07 compute-0 podman[74655]: 2025-11-29 06:16:07.051982279 +0000 UTC m=+0.041603922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa17425c446d8957fc900079d4b79ebb90c3761663ba22fa081eac6f9c54852/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa17425c446d8957fc900079d4b79ebb90c3761663ba22fa081eac6f9c54852/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa17425c446d8957fc900079d4b79ebb90c3761663ba22fa081eac6f9c54852/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:07 compute-0 podman[74655]: 2025-11-29 06:16:07.178100987 +0000 UTC m=+0.167722630 container init ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e (image=quay.io/ceph/ceph:v18, name=interesting_lamarr, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:16:07 compute-0 podman[74655]: 2025-11-29 06:16:07.18774241 +0000 UTC m=+0.177364053 container start ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e (image=quay.io/ceph/ceph:v18, name=interesting_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:07 compute-0 podman[74655]: 2025-11-29 06:16:07.19229297 +0000 UTC m=+0.181914583 container attach ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e (image=quay.io/ceph/ceph:v18, name=interesting_lamarr, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 06:16:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 29 06:16:07 compute-0 systemd[1]: libpod-ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e.scope: Deactivated successfully.
Nov 29 06:16:07 compute-0 podman[74655]: 2025-11-29 06:16:07.580007219 +0000 UTC m=+0.569628912 container died ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e (image=quay.io/ceph/ceph:v18, name=interesting_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 06:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aa17425c446d8957fc900079d4b79ebb90c3761663ba22fa081eac6f9c54852-merged.mount: Deactivated successfully.
Nov 29 06:16:07 compute-0 podman[74655]: 2025-11-29 06:16:07.634162426 +0000 UTC m=+0.623784029 container remove ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e (image=quay.io/ceph/ceph:v18, name=interesting_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 06:16:07 compute-0 systemd[1]: libpod-conmon-ee677ece805d8a292e28fced46d48e505d1794a40b15220e571403844b4a7f3e.scope: Deactivated successfully.
Nov 29 06:16:07 compute-0 podman[74746]: 2025-11-29 06:16:07.697182833 +0000 UTC m=+0.043828524 container create 69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48 (image=quay.io/ceph/ceph:v18, name=stupefied_feistel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:16:07 compute-0 systemd[1]: Started libpod-conmon-69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48.scope.
Nov 29 06:16:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabc489fac5450d73c52d15257aa4eadf8ca5f191cc1b8ff9304b8f07b284e63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabc489fac5450d73c52d15257aa4eadf8ca5f191cc1b8ff9304b8f07b284e63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cabc489fac5450d73c52d15257aa4eadf8ca5f191cc1b8ff9304b8f07b284e63/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:07 compute-0 podman[74746]: 2025-11-29 06:16:07.676579169 +0000 UTC m=+0.023224890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:07 compute-0 podman[74746]: 2025-11-29 06:16:07.783473692 +0000 UTC m=+0.130119453 container init 69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48 (image=quay.io/ceph/ceph:v18, name=stupefied_feistel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 06:16:07 compute-0 podman[74746]: 2025-11-29 06:16:07.794104603 +0000 UTC m=+0.140750314 container start 69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48 (image=quay.io/ceph/ceph:v18, name=stupefied_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 06:16:07 compute-0 podman[74746]: 2025-11-29 06:16:07.79858065 +0000 UTC m=+0.145226351 container attach 69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48 (image=quay.io/ceph/ceph:v18, name=stupefied_feistel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 29 06:16:08 compute-0 systemd[1]: libpod-69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48.scope: Deactivated successfully.
Nov 29 06:16:08 compute-0 podman[74746]: 2025-11-29 06:16:08.196073567 +0000 UTC m=+0.542719278 container died 69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48 (image=quay.io/ceph/ceph:v18, name=stupefied_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 06:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-cabc489fac5450d73c52d15257aa4eadf8ca5f191cc1b8ff9304b8f07b284e63-merged.mount: Deactivated successfully.
Nov 29 06:16:08 compute-0 podman[74746]: 2025-11-29 06:16:08.245772507 +0000 UTC m=+0.592418188 container remove 69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48 (image=quay.io/ceph/ceph:v18, name=stupefied_feistel, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 06:16:08 compute-0 systemd[1]: libpod-conmon-69adb72f1f72bfd776ffbba6933fb37ee375e2ee21955e734dc4ab77fb394c48.scope: Deactivated successfully.
Nov 29 06:16:08 compute-0 systemd[1]: Reloading.
Nov 29 06:16:08 compute-0 systemd-rc-local-generator[74826]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:16:08 compute-0 systemd-sysv-generator[74830]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:16:08 compute-0 systemd[1]: Reloading.
Nov 29 06:16:08 compute-0 systemd-rc-local-generator[74867]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:16:08 compute-0 systemd-sysv-generator[74871]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:16:08 compute-0 sshd-session[74796]: Invalid user exx from 138.124.186.225 port 37458
Nov 29 06:16:08 compute-0 systemd[1]: Starting Ceph mgr.compute-0.vxabpq for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 06:16:08 compute-0 sshd-session[74796]: Received disconnect from 138.124.186.225 port 37458:11: Bye Bye [preauth]
Nov 29 06:16:08 compute-0 sshd-session[74796]: Disconnected from invalid user exx 138.124.186.225 port 37458 [preauth]
Nov 29 06:16:09 compute-0 podman[74929]: 2025-11-29 06:16:09.110387207 +0000 UTC m=+0.052885892 container create 6f81410254a706b9fc390aa00af336410de8290fc59b4feb47f58688b4bdf6ee (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7578b82584efd40e9ae289ece12a96eb84cad8a437183825e491820111ef9aea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7578b82584efd40e9ae289ece12a96eb84cad8a437183825e491820111ef9aea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7578b82584efd40e9ae289ece12a96eb84cad8a437183825e491820111ef9aea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7578b82584efd40e9ae289ece12a96eb84cad8a437183825e491820111ef9aea/merged/var/lib/ceph/mgr/ceph-compute-0.vxabpq supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:09 compute-0 podman[74929]: 2025-11-29 06:16:09.081430345 +0000 UTC m=+0.023929070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:09 compute-0 podman[74929]: 2025-11-29 06:16:09.192519467 +0000 UTC m=+0.135018212 container init 6f81410254a706b9fc390aa00af336410de8290fc59b4feb47f58688b4bdf6ee (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 06:16:09 compute-0 podman[74929]: 2025-11-29 06:16:09.210743844 +0000 UTC m=+0.153242519 container start 6f81410254a706b9fc390aa00af336410de8290fc59b4feb47f58688b4bdf6ee (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 06:16:09 compute-0 bash[74929]: 6f81410254a706b9fc390aa00af336410de8290fc59b4feb47f58688b4bdf6ee
Nov 29 06:16:09 compute-0 systemd[1]: Started Ceph mgr.compute-0.vxabpq for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 06:16:09 compute-0 ceph-mgr[74948]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 06:16:09 compute-0 ceph-mgr[74948]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 06:16:09 compute-0 ceph-mgr[74948]: pidfile_write: ignore empty --pid-file
Nov 29 06:16:09 compute-0 podman[74949]: 2025-11-29 06:16:09.311548244 +0000 UTC m=+0.055249769 container create fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13 (image=quay.io/ceph/ceph:v18, name=ecstatic_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:16:09 compute-0 systemd[1]: Started libpod-conmon-fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13.scope.
Nov 29 06:16:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:09 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'alerts'
Nov 29 06:16:09 compute-0 podman[74949]: 2025-11-29 06:16:09.293047239 +0000 UTC m=+0.036748754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142729a31dc8e45559c7f0d7e5d02053cb8677f39579b6d1fde84a1b0acfb807/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142729a31dc8e45559c7f0d7e5d02053cb8677f39579b6d1fde84a1b0acfb807/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142729a31dc8e45559c7f0d7e5d02053cb8677f39579b6d1fde84a1b0acfb807/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:09 compute-0 podman[74949]: 2025-11-29 06:16:09.406521578 +0000 UTC m=+0.150223103 container init fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13 (image=quay.io/ceph/ceph:v18, name=ecstatic_mestorf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:16:09 compute-0 podman[74949]: 2025-11-29 06:16:09.419128436 +0000 UTC m=+0.162829921 container start fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13 (image=quay.io/ceph/ceph:v18, name=ecstatic_mestorf, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 06:16:09 compute-0 podman[74949]: 2025-11-29 06:16:09.422945694 +0000 UTC m=+0.166647229 container attach fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13 (image=quay.io/ceph/ceph:v18, name=ecstatic_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 06:16:09 compute-0 ceph-mgr[74948]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 06:16:09 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'balancer'
Nov 29 06:16:09 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:09.673+0000 7fa614c10140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 06:16:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 06:16:09 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/806291629' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]: 
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]: {
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     "health": {
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "status": "HEALTH_OK",
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "checks": {},
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "mutes": []
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     },
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     "election_epoch": 5,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     "quorum": [
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         0
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     ],
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     "quorum_names": [
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "compute-0"
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     ],
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     "quorum_age": 2,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     "monmap": {
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "epoch": 1,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "min_mon_release_name": "reef",
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "num_mons": 1
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     },
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     "osdmap": {
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "epoch": 1,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "num_osds": 0,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "num_up_osds": 0,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "osd_up_since": 0,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "num_in_osds": 0,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "osd_in_since": 0,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "num_remapped_pgs": 0
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     },
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     "pgmap": {
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "pgs_by_state": [],
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "num_pgs": 0,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "num_pools": 0,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "num_objects": 0,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "data_bytes": 0,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "bytes_used": 0,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "bytes_avail": 0,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "bytes_total": 0
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     },
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     "fsmap": {
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "epoch": 1,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "by_rank": [],
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "up:standby": 0
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     },
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     "mgrmap": {
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "available": false,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "num_standbys": 0,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "modules": [
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:             "iostat",
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:             "nfs",
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:             "restful"
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         ],
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "services": {}
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     },
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     "servicemap": {
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "epoch": 1,
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:         "services": {}
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     },
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]:     "progress_events": {}
Nov 29 06:16:09 compute-0 ecstatic_mestorf[74990]: }
Nov 29 06:16:09 compute-0 systemd[1]: libpod-fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13.scope: Deactivated successfully.
Nov 29 06:16:09 compute-0 podman[74949]: 2025-11-29 06:16:09.816218731 +0000 UTC m=+0.559920216 container died fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13 (image=quay.io/ceph/ceph:v18, name=ecstatic_mestorf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-142729a31dc8e45559c7f0d7e5d02053cb8677f39579b6d1fde84a1b0acfb807-merged.mount: Deactivated successfully.
Nov 29 06:16:09 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/806291629' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:09 compute-0 podman[74949]: 2025-11-29 06:16:09.854456665 +0000 UTC m=+0.598158150 container remove fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13 (image=quay.io/ceph/ceph:v18, name=ecstatic_mestorf, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 06:16:09 compute-0 systemd[1]: libpod-conmon-fb8233439b31bfe7c5d62ef54d1ad2cd18a62a62521619bab2bd8438d73ddd13.scope: Deactivated successfully.
Nov 29 06:16:09 compute-0 ceph-mgr[74948]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 06:16:09 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'cephadm'
Nov 29 06:16:09 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:09.919+0000 7fa614c10140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 06:16:11 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'crash'
Nov 29 06:16:11 compute-0 podman[75038]: 2025-11-29 06:16:11.956355017 +0000 UTC m=+0.070828640 container create f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c (image=quay.io/ceph/ceph:v18, name=admiring_hopper, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 06:16:12 compute-0 systemd[1]: Started libpod-conmon-f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c.scope.
Nov 29 06:16:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9124e2202a9748f1891ec18ce1055e0f51d4ddd97733d22e7ae3c1df9ae8c2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9124e2202a9748f1891ec18ce1055e0f51d4ddd97733d22e7ae3c1df9ae8c2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:12 compute-0 podman[75038]: 2025-11-29 06:16:11.92542853 +0000 UTC m=+0.039902203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9124e2202a9748f1891ec18ce1055e0f51d4ddd97733d22e7ae3c1df9ae8c2e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:12 compute-0 podman[75038]: 2025-11-29 06:16:12.032656002 +0000 UTC m=+0.147129605 container init f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c (image=quay.io/ceph/ceph:v18, name=admiring_hopper, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 06:16:12 compute-0 podman[75038]: 2025-11-29 06:16:12.037832199 +0000 UTC m=+0.152305792 container start f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c (image=quay.io/ceph/ceph:v18, name=admiring_hopper, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:16:12 compute-0 podman[75038]: 2025-11-29 06:16:12.040917246 +0000 UTC m=+0.155390839 container attach f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c (image=quay.io/ceph/ceph:v18, name=admiring_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 06:16:12 compute-0 ceph-mgr[74948]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 06:16:12 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'dashboard'
Nov 29 06:16:12 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:12.074+0000 7fa614c10140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 06:16:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 06:16:12 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/295402507' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:12 compute-0 admiring_hopper[75055]: 
Nov 29 06:16:12 compute-0 admiring_hopper[75055]: {
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     "health": {
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "status": "HEALTH_OK",
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "checks": {},
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "mutes": []
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     },
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     "election_epoch": 5,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     "quorum": [
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         0
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     ],
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     "quorum_names": [
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "compute-0"
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     ],
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     "quorum_age": 5,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     "monmap": {
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "epoch": 1,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "min_mon_release_name": "reef",
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "num_mons": 1
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     },
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     "osdmap": {
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "epoch": 1,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "num_osds": 0,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "num_up_osds": 0,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "osd_up_since": 0,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "num_in_osds": 0,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "osd_in_since": 0,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "num_remapped_pgs": 0
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     },
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     "pgmap": {
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "pgs_by_state": [],
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "num_pgs": 0,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "num_pools": 0,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "num_objects": 0,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "data_bytes": 0,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "bytes_used": 0,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "bytes_avail": 0,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "bytes_total": 0
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     },
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     "fsmap": {
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "epoch": 1,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "by_rank": [],
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "up:standby": 0
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     },
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     "mgrmap": {
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "available": false,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "num_standbys": 0,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "modules": [
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:             "iostat",
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:             "nfs",
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:             "restful"
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         ],
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "services": {}
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     },
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     "servicemap": {
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "epoch": 1,
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:         "services": {}
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     },
Nov 29 06:16:12 compute-0 admiring_hopper[75055]:     "progress_events": {}
Nov 29 06:16:12 compute-0 admiring_hopper[75055]: }
Nov 29 06:16:12 compute-0 systemd[1]: libpod-f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c.scope: Deactivated successfully.
Nov 29 06:16:12 compute-0 podman[75038]: 2025-11-29 06:16:12.48552515 +0000 UTC m=+0.599998743 container died f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c (image=quay.io/ceph/ceph:v18, name=admiring_hopper, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:16:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9124e2202a9748f1891ec18ce1055e0f51d4ddd97733d22e7ae3c1df9ae8c2e-merged.mount: Deactivated successfully.
Nov 29 06:16:12 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/295402507' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:12 compute-0 podman[75038]: 2025-11-29 06:16:12.541268322 +0000 UTC m=+0.655741925 container remove f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c (image=quay.io/ceph/ceph:v18, name=admiring_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 06:16:12 compute-0 systemd[1]: libpod-conmon-f44c90a3a202d1cecd0309f0635641921cadf47d1c11d0dad7dd1515fd08c49c.scope: Deactivated successfully.
Nov 29 06:16:13 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'devicehealth'
Nov 29 06:16:13 compute-0 ceph-mgr[74948]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 06:16:13 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 06:16:13 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:13.642+0000 7fa614c10140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 06:16:14 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 06:16:14 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 06:16:14 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:   from numpy import show_config as show_numpy_config
Nov 29 06:16:14 compute-0 ceph-mgr[74948]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 06:16:14 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'influx'
Nov 29 06:16:14 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:14.135+0000 7fa614c10140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 06:16:14 compute-0 ceph-mgr[74948]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 06:16:14 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'insights'
Nov 29 06:16:14 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:14.365+0000 7fa614c10140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 06:16:14 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'iostat'
Nov 29 06:16:14 compute-0 podman[75092]: 2025-11-29 06:16:14.63133696 +0000 UTC m=+0.067486289 container create a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7 (image=quay.io/ceph/ceph:v18, name=focused_bardeen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:14 compute-0 systemd[1]: Started libpod-conmon-a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7.scope.
Nov 29 06:16:14 compute-0 podman[75092]: 2025-11-29 06:16:14.593847581 +0000 UTC m=+0.029996940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cddf36f16e757a55ae5f250a90a3607e1fce512b5296d4827124cfdf9aab761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cddf36f16e757a55ae5f250a90a3607e1fce512b5296d4827124cfdf9aab761/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cddf36f16e757a55ae5f250a90a3607e1fce512b5296d4827124cfdf9aab761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:14 compute-0 podman[75092]: 2025-11-29 06:16:14.729770781 +0000 UTC m=+0.165920190 container init a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7 (image=quay.io/ceph/ceph:v18, name=focused_bardeen, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 06:16:14 compute-0 podman[75092]: 2025-11-29 06:16:14.739956449 +0000 UTC m=+0.176105828 container start a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7 (image=quay.io/ceph/ceph:v18, name=focused_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 06:16:14 compute-0 podman[75092]: 2025-11-29 06:16:14.745125115 +0000 UTC m=+0.181274484 container attach a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7 (image=quay.io/ceph/ceph:v18, name=focused_bardeen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 06:16:14 compute-0 ceph-mgr[74948]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 06:16:14 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:14.851+0000 7fa614c10140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 06:16:14 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'k8sevents'
Nov 29 06:16:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 06:16:15 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3107029855' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:15 compute-0 focused_bardeen[75108]: 
Nov 29 06:16:15 compute-0 focused_bardeen[75108]: {
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     "health": {
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "status": "HEALTH_OK",
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "checks": {},
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "mutes": []
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     },
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     "election_epoch": 5,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     "quorum": [
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         0
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     ],
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     "quorum_names": [
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "compute-0"
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     ],
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     "quorum_age": 8,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     "monmap": {
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "epoch": 1,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "min_mon_release_name": "reef",
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "num_mons": 1
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     },
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     "osdmap": {
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "epoch": 1,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "num_osds": 0,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "num_up_osds": 0,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "osd_up_since": 0,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "num_in_osds": 0,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "osd_in_since": 0,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "num_remapped_pgs": 0
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     },
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     "pgmap": {
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "pgs_by_state": [],
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "num_pgs": 0,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "num_pools": 0,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "num_objects": 0,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "data_bytes": 0,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "bytes_used": 0,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "bytes_avail": 0,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "bytes_total": 0
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     },
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     "fsmap": {
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "epoch": 1,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "by_rank": [],
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "up:standby": 0
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     },
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     "mgrmap": {
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "available": false,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "num_standbys": 0,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "modules": [
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:             "iostat",
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:             "nfs",
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:             "restful"
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         ],
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "services": {}
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     },
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     "servicemap": {
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "epoch": 1,
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:         "services": {}
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     },
Nov 29 06:16:15 compute-0 focused_bardeen[75108]:     "progress_events": {}
Nov 29 06:16:15 compute-0 focused_bardeen[75108]: }
Nov 29 06:16:15 compute-0 systemd[1]: libpod-a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7.scope: Deactivated successfully.
Nov 29 06:16:15 compute-0 podman[75092]: 2025-11-29 06:16:15.155693215 +0000 UTC m=+0.591842584 container died a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7 (image=quay.io/ceph/ceph:v18, name=focused_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:16:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cddf36f16e757a55ae5f250a90a3607e1fce512b5296d4827124cfdf9aab761-merged.mount: Deactivated successfully.
Nov 29 06:16:15 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3107029855' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:15 compute-0 podman[75092]: 2025-11-29 06:16:15.214477266 +0000 UTC m=+0.650626635 container remove a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7 (image=quay.io/ceph/ceph:v18, name=focused_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:15 compute-0 systemd[1]: libpod-conmon-a77def05d3b542e6813e73fd1fb76cb49c49c816897e2229a04faa28e0c0f0b7.scope: Deactivated successfully.
Nov 29 06:16:16 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'localpool'
Nov 29 06:16:16 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 06:16:17 compute-0 podman[75146]: 2025-11-29 06:16:17.330523632 +0000 UTC m=+0.077904262 container create cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 06:16:17 compute-0 systemd[1]: Started libpod-conmon-cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a.scope.
Nov 29 06:16:17 compute-0 podman[75146]: 2025-11-29 06:16:17.299833425 +0000 UTC m=+0.047214095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:17 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'mirroring'
Nov 29 06:16:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584160f81864a3434bad69d42828dd79ebb6a402dc26fa6137d5a1261b1e6d9b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584160f81864a3434bad69d42828dd79ebb6a402dc26fa6137d5a1261b1e6d9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584160f81864a3434bad69d42828dd79ebb6a402dc26fa6137d5a1261b1e6d9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:17 compute-0 podman[75146]: 2025-11-29 06:16:17.433237944 +0000 UTC m=+0.180618594 container init cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 06:16:17 compute-0 podman[75146]: 2025-11-29 06:16:17.442098275 +0000 UTC m=+0.189478915 container start cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:16:17 compute-0 podman[75146]: 2025-11-29 06:16:17.446261882 +0000 UTC m=+0.193642522 container attach cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 29 06:16:17 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'nfs'
Nov 29 06:16:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 06:16:17 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/524573884' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]: 
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]: {
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     "health": {
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "status": "HEALTH_OK",
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "checks": {},
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "mutes": []
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     },
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     "election_epoch": 5,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     "quorum": [
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         0
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     ],
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     "quorum_names": [
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "compute-0"
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     ],
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     "quorum_age": 10,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     "monmap": {
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "epoch": 1,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "min_mon_release_name": "reef",
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "num_mons": 1
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     },
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     "osdmap": {
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "epoch": 1,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "num_osds": 0,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "num_up_osds": 0,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "osd_up_since": 0,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "num_in_osds": 0,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "osd_in_since": 0,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "num_remapped_pgs": 0
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     },
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     "pgmap": {
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "pgs_by_state": [],
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "num_pgs": 0,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "num_pools": 0,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "num_objects": 0,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "data_bytes": 0,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "bytes_used": 0,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "bytes_avail": 0,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "bytes_total": 0
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     },
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     "fsmap": {
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "epoch": 1,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "by_rank": [],
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "up:standby": 0
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     },
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     "mgrmap": {
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "available": false,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "num_standbys": 0,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "modules": [
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:             "iostat",
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:             "nfs",
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:             "restful"
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         ],
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "services": {}
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     },
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     "servicemap": {
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "epoch": 1,
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:         "services": {}
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     },
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]:     "progress_events": {}
Nov 29 06:16:17 compute-0 elastic_brahmagupta[75162]: }
Nov 29 06:16:17 compute-0 systemd[1]: libpod-cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a.scope: Deactivated successfully.
Nov 29 06:16:17 compute-0 podman[75146]: 2025-11-29 06:16:17.85141829 +0000 UTC m=+0.598798950 container died cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 29 06:16:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-584160f81864a3434bad69d42828dd79ebb6a402dc26fa6137d5a1261b1e6d9b-merged.mount: Deactivated successfully.
Nov 29 06:16:17 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/524573884' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:17 compute-0 podman[75146]: 2025-11-29 06:16:17.915625604 +0000 UTC m=+0.663006244 container remove cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a (image=quay.io/ceph/ceph:v18, name=elastic_brahmagupta, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:16:17 compute-0 systemd[1]: libpod-conmon-cb8e5addde5e5cd9bf02fc355bb83ea57edf4951d2717739ecb5c21f9d24497a.scope: Deactivated successfully.
Nov 29 06:16:18 compute-0 ceph-mgr[74948]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 06:16:18 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'orchestrator'
Nov 29 06:16:18 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:18.355+0000 7fa614c10140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 06:16:19 compute-0 ceph-mgr[74948]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 06:16:19 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 06:16:19 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:19.039+0000 7fa614c10140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 06:16:19 compute-0 ceph-mgr[74948]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 06:16:19 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'osd_support'
Nov 29 06:16:19 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:19.351+0000 7fa614c10140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 06:16:19 compute-0 ceph-mgr[74948]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 06:16:19 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 06:16:19 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:19.574+0000 7fa614c10140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 06:16:19 compute-0 ceph-mgr[74948]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 06:16:19 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'progress'
Nov 29 06:16:19 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:19.843+0000 7fa614c10140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 06:16:19 compute-0 podman[75202]: 2025-11-29 06:16:19.99728857 +0000 UTC m=+0.051583889 container create 3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d (image=quay.io/ceph/ceph:v18, name=infallible_mccarthy, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 06:16:20 compute-0 systemd[1]: Started libpod-conmon-3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d.scope.
Nov 29 06:16:20 compute-0 sshd-session[75200]: Received disconnect from 31.6.212.12 port 37544:11: Bye Bye [preauth]
Nov 29 06:16:20 compute-0 sshd-session[75200]: Disconnected from authenticating user root 31.6.212.12 port 37544 [preauth]
Nov 29 06:16:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d9309783e64de5f1de97fcf2bfd9652f7324a91bfad36997a17516bdc371ab3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d9309783e64de5f1de97fcf2bfd9652f7324a91bfad36997a17516bdc371ab3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d9309783e64de5f1de97fcf2bfd9652f7324a91bfad36997a17516bdc371ab3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:20 compute-0 podman[75202]: 2025-11-29 06:16:19.981398251 +0000 UTC m=+0.035693590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:20 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:20.097+0000 7fa614c10140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 06:16:20 compute-0 ceph-mgr[74948]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 06:16:20 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'prometheus'
Nov 29 06:16:20 compute-0 podman[75202]: 2025-11-29 06:16:20.101017 +0000 UTC m=+0.155312409 container init 3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d (image=quay.io/ceph/ceph:v18, name=infallible_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:20 compute-0 podman[75202]: 2025-11-29 06:16:20.110410346 +0000 UTC m=+0.164705675 container start 3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d (image=quay.io/ceph/ceph:v18, name=infallible_mccarthy, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:16:20 compute-0 podman[75202]: 2025-11-29 06:16:20.11408527 +0000 UTC m=+0.168380679 container attach 3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d (image=quay.io/ceph/ceph:v18, name=infallible_mccarthy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:16:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 06:16:20 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/362093507' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]: 
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]: {
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     "health": {
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "status": "HEALTH_OK",
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "checks": {},
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "mutes": []
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     },
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     "election_epoch": 5,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     "quorum": [
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         0
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     ],
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     "quorum_names": [
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "compute-0"
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     ],
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     "quorum_age": 13,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     "monmap": {
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "epoch": 1,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "min_mon_release_name": "reef",
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "num_mons": 1
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     },
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     "osdmap": {
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "epoch": 1,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "num_osds": 0,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "num_up_osds": 0,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "osd_up_since": 0,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "num_in_osds": 0,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "osd_in_since": 0,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "num_remapped_pgs": 0
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     },
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     "pgmap": {
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "pgs_by_state": [],
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "num_pgs": 0,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "num_pools": 0,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "num_objects": 0,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "data_bytes": 0,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "bytes_used": 0,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "bytes_avail": 0,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "bytes_total": 0
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     },
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     "fsmap": {
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "epoch": 1,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "by_rank": [],
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "up:standby": 0
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     },
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     "mgrmap": {
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "available": false,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "num_standbys": 0,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "modules": [
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:             "iostat",
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:             "nfs",
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:             "restful"
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         ],
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "services": {}
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     },
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     "servicemap": {
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "epoch": 1,
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:         "services": {}
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     },
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]:     "progress_events": {}
Nov 29 06:16:20 compute-0 infallible_mccarthy[75217]: }
Nov 29 06:16:20 compute-0 systemd[1]: libpod-3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d.scope: Deactivated successfully.
Nov 29 06:16:20 compute-0 podman[75202]: 2025-11-29 06:16:20.512467465 +0000 UTC m=+0.566762824 container died 3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d (image=quay.io/ceph/ceph:v18, name=infallible_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:16:20 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/362093507' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d9309783e64de5f1de97fcf2bfd9652f7324a91bfad36997a17516bdc371ab3-merged.mount: Deactivated successfully.
Nov 29 06:16:20 compute-0 podman[75202]: 2025-11-29 06:16:20.569230728 +0000 UTC m=+0.623526047 container remove 3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d (image=quay.io/ceph/ceph:v18, name=infallible_mccarthy, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:20 compute-0 systemd[1]: libpod-conmon-3594411ca6ee9500dfcfa8041cf09c3451d15386ff6d3ec27e7da6c9b9d7323d.scope: Deactivated successfully.
Nov 29 06:16:21 compute-0 ceph-mgr[74948]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 06:16:21 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'rbd_support'
Nov 29 06:16:21 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:21.016+0000 7fa614c10140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 06:16:21 compute-0 ceph-mgr[74948]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 06:16:21 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:21.300+0000 7fa614c10140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 06:16:21 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'restful'
Nov 29 06:16:21 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'rgw'
Nov 29 06:16:22 compute-0 podman[75257]: 2025-11-29 06:16:22.643809314 +0000 UTC m=+0.048518232 container create a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195 (image=quay.io/ceph/ceph:v18, name=tender_dhawan, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:22 compute-0 systemd[1]: Started libpod-conmon-a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195.scope.
Nov 29 06:16:22 compute-0 ceph-mgr[74948]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 06:16:22 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'rook'
Nov 29 06:16:22 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:22.691+0000 7fa614c10140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 06:16:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a9b695cfdc7cb097cdde7d7e63128baa33f67d62e83b4a335b963f427ba4e6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a9b695cfdc7cb097cdde7d7e63128baa33f67d62e83b4a335b963f427ba4e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a9b695cfdc7cb097cdde7d7e63128baa33f67d62e83b4a335b963f427ba4e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:22 compute-0 podman[75257]: 2025-11-29 06:16:22.62277153 +0000 UTC m=+0.027480458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:22 compute-0 podman[75257]: 2025-11-29 06:16:22.727671583 +0000 UTC m=+0.132380571 container init a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195 (image=quay.io/ceph/ceph:v18, name=tender_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 06:16:22 compute-0 podman[75257]: 2025-11-29 06:16:22.740542217 +0000 UTC m=+0.145251125 container start a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195 (image=quay.io/ceph/ceph:v18, name=tender_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 06:16:22 compute-0 podman[75257]: 2025-11-29 06:16:22.832864526 +0000 UTC m=+0.237573544 container attach a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195 (image=quay.io/ceph/ceph:v18, name=tender_dhawan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 06:16:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 06:16:23 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1379746660' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:23 compute-0 tender_dhawan[75273]: 
Nov 29 06:16:23 compute-0 tender_dhawan[75273]: {
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     "health": {
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "status": "HEALTH_OK",
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "checks": {},
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "mutes": []
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     },
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     "election_epoch": 5,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     "quorum": [
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         0
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     ],
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     "quorum_names": [
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "compute-0"
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     ],
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     "quorum_age": 16,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     "monmap": {
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "epoch": 1,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "min_mon_release_name": "reef",
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "num_mons": 1
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     },
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     "osdmap": {
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "epoch": 1,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "num_osds": 0,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "num_up_osds": 0,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "osd_up_since": 0,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "num_in_osds": 0,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "osd_in_since": 0,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "num_remapped_pgs": 0
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     },
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     "pgmap": {
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "pgs_by_state": [],
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "num_pgs": 0,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "num_pools": 0,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "num_objects": 0,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "data_bytes": 0,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "bytes_used": 0,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "bytes_avail": 0,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "bytes_total": 0
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     },
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     "fsmap": {
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "epoch": 1,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "by_rank": [],
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "up:standby": 0
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     },
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     "mgrmap": {
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "available": false,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "num_standbys": 0,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "modules": [
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:             "iostat",
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:             "nfs",
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:             "restful"
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         ],
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "services": {}
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     },
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     "servicemap": {
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "epoch": 1,
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:         "services": {}
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     },
Nov 29 06:16:23 compute-0 tender_dhawan[75273]:     "progress_events": {}
Nov 29 06:16:23 compute-0 tender_dhawan[75273]: }
Nov 29 06:16:23 compute-0 systemd[1]: libpod-a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195.scope: Deactivated successfully.
Nov 29 06:16:23 compute-0 podman[75257]: 2025-11-29 06:16:23.152165237 +0000 UTC m=+0.556874145 container died a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195 (image=quay.io/ceph/ceph:v18, name=tender_dhawan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:16:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8a9b695cfdc7cb097cdde7d7e63128baa33f67d62e83b4a335b963f427ba4e6-merged.mount: Deactivated successfully.
Nov 29 06:16:23 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1379746660' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:23 compute-0 podman[75257]: 2025-11-29 06:16:23.194140303 +0000 UTC m=+0.598849211 container remove a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195 (image=quay.io/ceph/ceph:v18, name=tender_dhawan, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:16:23 compute-0 systemd[1]: libpod-conmon-a84ec53b56b69986a2d33f5ed147c519f7690aee237cb7bd7a0a89a69cf42195.scope: Deactivated successfully.
Nov 29 06:16:24 compute-0 ceph-mgr[74948]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 06:16:24 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'selftest'
Nov 29 06:16:24 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:24.640+0000 7fa614c10140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 06:16:24 compute-0 ceph-mgr[74948]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 06:16:24 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'snap_schedule'
Nov 29 06:16:24 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:24.860+0000 7fa614c10140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 06:16:25 compute-0 ceph-mgr[74948]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 06:16:25 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:25.086+0000 7fa614c10140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 06:16:25 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'stats'
Nov 29 06:16:25 compute-0 podman[75313]: 2025-11-29 06:16:25.270800386 +0000 UTC m=+0.054140310 container create 3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925 (image=quay.io/ceph/ceph:v18, name=hopeful_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 06:16:25 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'status'
Nov 29 06:16:25 compute-0 systemd[1]: Started libpod-conmon-3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925.scope.
Nov 29 06:16:25 compute-0 podman[75313]: 2025-11-29 06:16:25.24157297 +0000 UTC m=+0.024912914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc76f32ebfd43d2a3429ad29a9eade799314b2efb0719f4ae62469cadf33f921/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc76f32ebfd43d2a3429ad29a9eade799314b2efb0719f4ae62469cadf33f921/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc76f32ebfd43d2a3429ad29a9eade799314b2efb0719f4ae62469cadf33f921/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:25 compute-0 podman[75313]: 2025-11-29 06:16:25.372340715 +0000 UTC m=+0.155680659 container init 3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925 (image=quay.io/ceph/ceph:v18, name=hopeful_curran, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 06:16:25 compute-0 podman[75313]: 2025-11-29 06:16:25.378618643 +0000 UTC m=+0.161958577 container start 3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925 (image=quay.io/ceph/ceph:v18, name=hopeful_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 06:16:25 compute-0 podman[75313]: 2025-11-29 06:16:25.383373817 +0000 UTC m=+0.166713771 container attach 3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925 (image=quay.io/ceph/ceph:v18, name=hopeful_curran, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:16:25 compute-0 ceph-mgr[74948]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 06:16:25 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'telegraf'
Nov 29 06:16:25 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:25.577+0000 7fa614c10140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 06:16:25 compute-0 ceph-mgr[74948]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 06:16:25 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'telemetry'
Nov 29 06:16:25 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:25.843+0000 7fa614c10140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 06:16:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 06:16:25 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/378257284' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:25 compute-0 hopeful_curran[75330]: 
Nov 29 06:16:25 compute-0 hopeful_curran[75330]: {
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     "health": {
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "status": "HEALTH_OK",
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "checks": {},
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "mutes": []
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     },
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     "election_epoch": 5,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     "quorum": [
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         0
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     ],
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     "quorum_names": [
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "compute-0"
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     ],
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     "quorum_age": 18,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     "monmap": {
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "epoch": 1,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "min_mon_release_name": "reef",
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "num_mons": 1
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     },
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     "osdmap": {
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "epoch": 1,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "num_osds": 0,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "num_up_osds": 0,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "osd_up_since": 0,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "num_in_osds": 0,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "osd_in_since": 0,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "num_remapped_pgs": 0
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     },
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     "pgmap": {
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "pgs_by_state": [],
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "num_pgs": 0,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "num_pools": 0,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "num_objects": 0,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "data_bytes": 0,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "bytes_used": 0,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "bytes_avail": 0,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "bytes_total": 0
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     },
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     "fsmap": {
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "epoch": 1,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "by_rank": [],
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "up:standby": 0
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     },
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     "mgrmap": {
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "available": false,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "num_standbys": 0,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "modules": [
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:             "iostat",
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:             "nfs",
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:             "restful"
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         ],
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "services": {}
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     },
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     "servicemap": {
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "epoch": 1,
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:         "services": {}
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     },
Nov 29 06:16:25 compute-0 hopeful_curran[75330]:     "progress_events": {}
Nov 29 06:16:25 compute-0 hopeful_curran[75330]: }
Nov 29 06:16:25 compute-0 systemd[1]: libpod-3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925.scope: Deactivated successfully.
Nov 29 06:16:25 compute-0 podman[75313]: 2025-11-29 06:16:25.866311812 +0000 UTC m=+0.649651746 container died 3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925 (image=quay.io/ceph/ceph:v18, name=hopeful_curran, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:25 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/378257284' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc76f32ebfd43d2a3429ad29a9eade799314b2efb0719f4ae62469cadf33f921-merged.mount: Deactivated successfully.
Nov 29 06:16:25 compute-0 podman[75313]: 2025-11-29 06:16:25.933521181 +0000 UTC m=+0.716861115 container remove 3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925 (image=quay.io/ceph/ceph:v18, name=hopeful_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 06:16:25 compute-0 systemd[1]: libpod-conmon-3f080c16850f875f2f2efbc7c16a32d1e7cee7a323349d9e7ce88e44b331e925.scope: Deactivated successfully.
Nov 29 06:16:26 compute-0 ceph-mgr[74948]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 06:16:26 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 06:16:26 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:26.408+0000 7fa614c10140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 06:16:27 compute-0 ceph-mgr[74948]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 06:16:27 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'volumes'
Nov 29 06:16:27 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:27.020+0000 7fa614c10140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 06:16:27 compute-0 ceph-mgr[74948]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 06:16:27 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'zabbix'
Nov 29 06:16:27 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:27.689+0000 7fa614c10140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 06:16:27 compute-0 ceph-mgr[74948]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 06:16:27 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:27.909+0000 7fa614c10140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 06:16:27 compute-0 ceph-mgr[74948]: ms_deliver_dispatch: unhandled message 0x55dc33b48f20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 06:16:27 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.vxabpq
Nov 29 06:16:28 compute-0 podman[75369]: 2025-11-29 06:16:28.00826553 +0000 UTC m=+0.041511274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:29 compute-0 podman[75369]: 2025-11-29 06:16:29.199079606 +0000 UTC m=+1.232325260 container create 7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa (image=quay.io/ceph/ceph:v18, name=suspicious_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 06:16:29 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.vxabpq(active, starting, since 1.29032s)
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr handle_mgr_map Activating!
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr handle_mgr_map I am now activating
Nov 29 06:16:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 06:16:29 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 06:16:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 06:16:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 06:16:29 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 06:16:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 06:16:29 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 06:16:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 06:16:29 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 06:16:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.vxabpq", "id": "compute-0.vxabpq"} v 0) v1
Nov 29 06:16:29 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr metadata", "who": "compute-0.vxabpq", "id": "compute-0.vxabpq"}]: dispatch
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: balancer
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [balancer INFO root] Starting
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:16:29
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [balancer INFO root] No pools available
Nov 29 06:16:29 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Manager daemon compute-0.vxabpq is now available
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: crash
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: devicehealth
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [devicehealth INFO root] Starting
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: iostat
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: nfs
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: orchestrator
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: pg_autoscaler
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: progress
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [progress INFO root] Loading...
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [progress INFO root] No stored events to load
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [progress INFO root] Loaded [] historic events
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 06:16:29 compute-0 systemd[1]: Started libpod-conmon-7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa.scope.
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] recovery thread starting
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] starting setup
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: rbd_support
Nov 29 06:16:29 compute-0 ceph-mon[74654]: Activating manager daemon compute-0.vxabpq
Nov 29 06:16:29 compute-0 ceph-mon[74654]: mgrmap e2: compute-0.vxabpq(active, starting, since 1.29032s)
Nov 29 06:16:29 compute-0 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 06:16:29 compute-0 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 06:16:29 compute-0 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 06:16:29 compute-0 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 06:16:29 compute-0 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr metadata", "who": "compute-0.vxabpq", "id": "compute-0.vxabpq"}]: dispatch
Nov 29 06:16:29 compute-0 ceph-mon[74654]: Manager daemon compute-0.vxabpq is now available
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: restful
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: status
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 06:16:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/mirror_snapshot_schedule"} v 0) v1
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/mirror_snapshot_schedule"}]: dispatch
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: telemetry
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [restful WARNING root] server not running: no certificate configured
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] PerfHandler: starting
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TaskHandler: starting
Nov 29 06:16:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 29 06:16:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/trash_purge_schedule"} v 0) v1
Nov 29 06:16:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/trash_purge_schedule"}]: dispatch
Nov 29 06:16:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:16:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] setup complete
Nov 29 06:16:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 29 06:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27bb6159ff2abe455b10f69e2dac2dd1a4164f53be4474987593d0d9b95cbbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27bb6159ff2abe455b10f69e2dac2dd1a4164f53be4474987593d0d9b95cbbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:29 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: volumes
Nov 29 06:16:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27bb6159ff2abe455b10f69e2dac2dd1a4164f53be4474987593d0d9b95cbbe/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:29 compute-0 podman[75369]: 2025-11-29 06:16:29.295731977 +0000 UTC m=+1.328977651 container init 7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa (image=quay.io/ceph/ceph:v18, name=suspicious_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:16:29 compute-0 podman[75369]: 2025-11-29 06:16:29.304746831 +0000 UTC m=+1.337992485 container start 7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa (image=quay.io/ceph/ceph:v18, name=suspicious_allen, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:29 compute-0 podman[75369]: 2025-11-29 06:16:29.308694733 +0000 UTC m=+1.341940437 container attach 7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa (image=quay.io/ceph/ceph:v18, name=suspicious_allen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 06:16:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 06:16:29 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668231799' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:29 compute-0 suspicious_allen[75419]: 
Nov 29 06:16:29 compute-0 suspicious_allen[75419]: {
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     "health": {
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "status": "HEALTH_OK",
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "checks": {},
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "mutes": []
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     },
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     "election_epoch": 5,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     "quorum": [
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         0
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     ],
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     "quorum_names": [
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "compute-0"
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     ],
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     "quorum_age": 22,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     "monmap": {
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "epoch": 1,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "min_mon_release_name": "reef",
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "num_mons": 1
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     },
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     "osdmap": {
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "epoch": 1,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "num_osds": 0,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "num_up_osds": 0,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "osd_up_since": 0,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "num_in_osds": 0,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "osd_in_since": 0,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "num_remapped_pgs": 0
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     },
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     "pgmap": {
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "pgs_by_state": [],
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "num_pgs": 0,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "num_pools": 0,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "num_objects": 0,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "data_bytes": 0,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "bytes_used": 0,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "bytes_avail": 0,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "bytes_total": 0
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     },
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     "fsmap": {
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "epoch": 1,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "by_rank": [],
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "up:standby": 0
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     },
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     "mgrmap": {
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "available": false,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "num_standbys": 0,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "modules": [
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:             "iostat",
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:             "nfs",
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:             "restful"
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         ],
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "services": {}
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     },
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     "servicemap": {
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "epoch": 1,
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:         "services": {}
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     },
Nov 29 06:16:29 compute-0 suspicious_allen[75419]:     "progress_events": {}
Nov 29 06:16:29 compute-0 suspicious_allen[75419]: }
Nov 29 06:16:29 compute-0 systemd[1]: libpod-7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa.scope: Deactivated successfully.
Nov 29 06:16:29 compute-0 conmon[75419]: conmon 7c56bc0b4df2186584d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa.scope/container/memory.events
Nov 29 06:16:29 compute-0 podman[75369]: 2025-11-29 06:16:29.736421908 +0000 UTC m=+1.769667602 container died 7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa (image=quay.io/ceph/ceph:v18, name=suspicious_allen, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 06:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a27bb6159ff2abe455b10f69e2dac2dd1a4164f53be4474987593d0d9b95cbbe-merged.mount: Deactivated successfully.
Nov 29 06:16:29 compute-0 podman[75369]: 2025-11-29 06:16:29.795634971 +0000 UTC m=+1.828880655 container remove 7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa (image=quay.io/ceph/ceph:v18, name=suspicious_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 06:16:29 compute-0 systemd[1]: libpod-conmon-7c56bc0b4df2186584d046c1e8839481ae8511610c867d6d0eb138777f9b05aa.scope: Deactivated successfully.
Nov 29 06:16:30 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.vxabpq(active, since 2s)
Nov 29 06:16:30 compute-0 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/mirror_snapshot_schedule"}]: dispatch
Nov 29 06:16:30 compute-0 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/trash_purge_schedule"}]: dispatch
Nov 29 06:16:30 compute-0 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:30 compute-0 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:30 compute-0 ceph-mon[74654]: from='mgr.14102 192.168.122.100:0/2747328161' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:30 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2668231799' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:30 compute-0 ceph-mon[74654]: mgrmap e3: compute-0.vxabpq(active, since 2s)
Nov 29 06:16:31 compute-0 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 06:16:31 compute-0 podman[75502]: 2025-11-29 06:16:31.897412376 +0000 UTC m=+0.068094965 container create e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538 (image=quay.io/ceph/ceph:v18, name=hungry_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:16:31 compute-0 systemd[1]: Started libpod-conmon-e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538.scope.
Nov 29 06:16:31 compute-0 podman[75502]: 2025-11-29 06:16:31.8710207 +0000 UTC m=+0.041703299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9940341278fa9575df8df96f2ec704781775be8f84fbb24e0bf586ad1e3aea37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9940341278fa9575df8df96f2ec704781775be8f84fbb24e0bf586ad1e3aea37/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9940341278fa9575df8df96f2ec704781775be8f84fbb24e0bf586ad1e3aea37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:32 compute-0 podman[75502]: 2025-11-29 06:16:32.008762382 +0000 UTC m=+0.179444981 container init e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538 (image=quay.io/ceph/ceph:v18, name=hungry_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:16:32 compute-0 podman[75502]: 2025-11-29 06:16:32.01825231 +0000 UTC m=+0.188934899 container start e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538 (image=quay.io/ceph/ceph:v18, name=hungry_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:32 compute-0 podman[75502]: 2025-11-29 06:16:32.022593172 +0000 UTC m=+0.193275771 container attach e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538 (image=quay.io/ceph/ceph:v18, name=hungry_gagarin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 06:16:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 06:16:32 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2531602317' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]: 
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]: {
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     "fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     "health": {
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "status": "HEALTH_OK",
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "checks": {},
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "mutes": []
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     },
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     "election_epoch": 5,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     "quorum": [
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         0
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     ],
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     "quorum_names": [
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "compute-0"
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     ],
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     "quorum_age": 25,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     "monmap": {
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "epoch": 1,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "min_mon_release_name": "reef",
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "num_mons": 1
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     },
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     "osdmap": {
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "epoch": 1,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "num_osds": 0,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "num_up_osds": 0,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "osd_up_since": 0,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "num_in_osds": 0,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "osd_in_since": 0,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "num_remapped_pgs": 0
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     },
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     "pgmap": {
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "pgs_by_state": [],
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "num_pgs": 0,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "num_pools": 0,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "num_objects": 0,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "data_bytes": 0,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "bytes_used": 0,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "bytes_avail": 0,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "bytes_total": 0
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     },
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     "fsmap": {
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "epoch": 1,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "by_rank": [],
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "up:standby": 0
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     },
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     "mgrmap": {
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "available": true,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "num_standbys": 0,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "modules": [
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:             "iostat",
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:             "nfs",
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:             "restful"
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         ],
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "services": {}
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     },
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     "servicemap": {
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "epoch": 1,
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "modified": "2025-11-29T06:16:03.952029+0000",
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:         "services": {}
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     },
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]:     "progress_events": {}
Nov 29 06:16:32 compute-0 hungry_gagarin[75518]: }
Nov 29 06:16:32 compute-0 systemd[1]: libpod-e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538.scope: Deactivated successfully.
Nov 29 06:16:32 compute-0 podman[75502]: 2025-11-29 06:16:32.77680237 +0000 UTC m=+0.947484979 container died e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538 (image=quay.io/ceph/ceph:v18, name=hungry_gagarin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 06:16:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9940341278fa9575df8df96f2ec704781775be8f84fbb24e0bf586ad1e3aea37-merged.mount: Deactivated successfully.
Nov 29 06:16:32 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2531602317' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 06:16:32 compute-0 podman[75502]: 2025-11-29 06:16:32.831014262 +0000 UTC m=+1.001696821 container remove e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538 (image=quay.io/ceph/ceph:v18, name=hungry_gagarin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:16:32 compute-0 systemd[1]: libpod-conmon-e610ea1f19fe5e072465d55cbb1b15b4004e795bef3b4f54d9bc294e47cba538.scope: Deactivated successfully.
Nov 29 06:16:32 compute-0 podman[75556]: 2025-11-29 06:16:32.907213465 +0000 UTC m=+0.053903594 container create e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3 (image=quay.io/ceph/ceph:v18, name=angry_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:32 compute-0 systemd[1]: Started libpod-conmon-e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3.scope.
Nov 29 06:16:32 compute-0 podman[75556]: 2025-11-29 06:16:32.881009804 +0000 UTC m=+0.027700013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00d0413133e46bcc5b864f36b6b44581db8e4e671339f93a17b6112df78a008c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00d0413133e46bcc5b864f36b6b44581db8e4e671339f93a17b6112df78a008c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00d0413133e46bcc5b864f36b6b44581db8e4e671339f93a17b6112df78a008c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00d0413133e46bcc5b864f36b6b44581db8e4e671339f93a17b6112df78a008c/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:33 compute-0 podman[75556]: 2025-11-29 06:16:33.019770205 +0000 UTC m=+0.166460344 container init e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3 (image=quay.io/ceph/ceph:v18, name=angry_ramanujan, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:33 compute-0 podman[75556]: 2025-11-29 06:16:33.028579774 +0000 UTC m=+0.175269883 container start e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3 (image=quay.io/ceph/ceph:v18, name=angry_ramanujan, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:33 compute-0 podman[75556]: 2025-11-29 06:16:33.032489815 +0000 UTC m=+0.179180014 container attach e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3 (image=quay.io/ceph/ceph:v18, name=angry_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:16:33 compute-0 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 06:16:33 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 06:16:33 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/60232043' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 06:16:33 compute-0 systemd[1]: libpod-e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3.scope: Deactivated successfully.
Nov 29 06:16:33 compute-0 podman[75556]: 2025-11-29 06:16:33.582938377 +0000 UTC m=+0.729628576 container died e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3 (image=quay.io/ceph/ceph:v18, name=angry_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 06:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-00d0413133e46bcc5b864f36b6b44581db8e4e671339f93a17b6112df78a008c-merged.mount: Deactivated successfully.
Nov 29 06:16:33 compute-0 podman[75556]: 2025-11-29 06:16:33.635858552 +0000 UTC m=+0.782548661 container remove e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3 (image=quay.io/ceph/ceph:v18, name=angry_ramanujan, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:33 compute-0 systemd[1]: libpod-conmon-e1713de655ad7ca7ee723479cba6602d309177b883a2ac63cb3ef5df93e83cf3.scope: Deactivated successfully.
Nov 29 06:16:33 compute-0 podman[75609]: 2025-11-29 06:16:33.740288333 +0000 UTC m=+0.059277306 container create a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7 (image=quay.io/ceph/ceph:v18, name=vigilant_turing, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 06:16:33 compute-0 systemd[1]: Started libpod-conmon-a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7.scope.
Nov 29 06:16:33 compute-0 podman[75609]: 2025-11-29 06:16:33.711698575 +0000 UTC m=+0.030687588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38b228cb168e92ee01a172477e02817d378a4fa99ef7fd48255bd9c0462bb38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38b228cb168e92ee01a172477e02817d378a4fa99ef7fd48255bd9c0462bb38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a38b228cb168e92ee01a172477e02817d378a4fa99ef7fd48255bd9c0462bb38/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:33 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/60232043' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 06:16:33 compute-0 podman[75609]: 2025-11-29 06:16:33.845439143 +0000 UTC m=+0.164428176 container init a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7 (image=quay.io/ceph/ceph:v18, name=vigilant_turing, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:33 compute-0 podman[75609]: 2025-11-29 06:16:33.855741975 +0000 UTC m=+0.174730948 container start a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7 (image=quay.io/ceph/ceph:v18, name=vigilant_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:16:33 compute-0 podman[75609]: 2025-11-29 06:16:33.86052259 +0000 UTC m=+0.179511563 container attach a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7 (image=quay.io/ceph/ceph:v18, name=vigilant_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 06:16:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 29 06:16:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2380306659' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 06:16:34 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2380306659' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 06:16:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2380306659' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr respawn  1: '-n'
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr respawn  2: 'mgr.compute-0.vxabpq'
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr respawn  3: '-f'
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr respawn  4: '--setuser'
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr respawn  5: 'ceph'
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr respawn  6: '--setgroup'
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr respawn  7: 'ceph'
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr respawn  8: '--default-log-to-file=false'
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr respawn  9: '--default-log-to-journald=true'
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: mgr respawn  exe_path /proc/self/exe
Nov 29 06:16:34 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.vxabpq(active, since 6s)
Nov 29 06:16:34 compute-0 systemd[1]: libpod-a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7.scope: Deactivated successfully.
Nov 29 06:16:34 compute-0 podman[75609]: 2025-11-29 06:16:34.895403479 +0000 UTC m=+1.214392432 container died a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7 (image=quay.io/ceph/ceph:v18, name=vigilant_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 29 06:16:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a38b228cb168e92ee01a172477e02817d378a4fa99ef7fd48255bd9c0462bb38-merged.mount: Deactivated successfully.
Nov 29 06:16:34 compute-0 podman[75609]: 2025-11-29 06:16:34.950358051 +0000 UTC m=+1.269346994 container remove a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7 (image=quay.io/ceph/ceph:v18, name=vigilant_turing, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:34 compute-0 systemd[1]: libpod-conmon-a8bc4e2ebd7836cd1f1a3c01e044b9d8fa32d1d7561fe2c337124f2556bd8cd7.scope: Deactivated successfully.
Nov 29 06:16:34 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: ignoring --setuser ceph since I am not root
Nov 29 06:16:34 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: ignoring --setgroup ceph since I am not root
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 06:16:34 compute-0 ceph-mgr[74948]: pidfile_write: ignore empty --pid-file
Nov 29 06:16:35 compute-0 podman[75666]: 2025-11-29 06:16:35.007719652 +0000 UTC m=+0.036848842 container create 35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba (image=quay.io/ceph/ceph:v18, name=nostalgic_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 06:16:35 compute-0 systemd[1]: Started libpod-conmon-35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba.scope.
Nov 29 06:16:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5a3a0704adaa95f045a0aec7b480cec4098c03ff111adae85ea86db8380e3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5a3a0704adaa95f045a0aec7b480cec4098c03ff111adae85ea86db8380e3b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5a3a0704adaa95f045a0aec7b480cec4098c03ff111adae85ea86db8380e3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:35 compute-0 podman[75666]: 2025-11-29 06:16:35.073427738 +0000 UTC m=+0.102556938 container init 35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba (image=quay.io/ceph/ceph:v18, name=nostalgic_wilbur, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 06:16:35 compute-0 podman[75666]: 2025-11-29 06:16:35.08373957 +0000 UTC m=+0.112868760 container start 35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba (image=quay.io/ceph/ceph:v18, name=nostalgic_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:16:35 compute-0 podman[75666]: 2025-11-29 06:16:34.992730038 +0000 UTC m=+0.021859248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:35 compute-0 podman[75666]: 2025-11-29 06:16:35.088155004 +0000 UTC m=+0.117284194 container attach 35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba (image=quay.io/ceph/ceph:v18, name=nostalgic_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 06:16:35 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'alerts'
Nov 29 06:16:35 compute-0 ceph-mgr[74948]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 06:16:35 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'balancer'
Nov 29 06:16:35 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:35.408+0000 7f91542c8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 06:16:35 compute-0 ceph-mgr[74948]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 06:16:35 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'cephadm'
Nov 29 06:16:35 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:35.669+0000 7f91542c8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 06:16:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 06:16:35 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3397641018' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 06:16:35 compute-0 nostalgic_wilbur[75706]: {
Nov 29 06:16:35 compute-0 nostalgic_wilbur[75706]:     "epoch": 4,
Nov 29 06:16:35 compute-0 nostalgic_wilbur[75706]:     "available": true,
Nov 29 06:16:35 compute-0 nostalgic_wilbur[75706]:     "active_name": "compute-0.vxabpq",
Nov 29 06:16:35 compute-0 nostalgic_wilbur[75706]:     "num_standby": 0
Nov 29 06:16:35 compute-0 nostalgic_wilbur[75706]: }
Nov 29 06:16:35 compute-0 systemd[1]: libpod-35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba.scope: Deactivated successfully.
Nov 29 06:16:35 compute-0 conmon[75706]: conmon 35d86aaa2001a400f0bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba.scope/container/memory.events
Nov 29 06:16:35 compute-0 podman[75666]: 2025-11-29 06:16:35.71657287 +0000 UTC m=+0.745702100 container died 35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba (image=quay.io/ceph/ceph:v18, name=nostalgic_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 06:16:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c5a3a0704adaa95f045a0aec7b480cec4098c03ff111adae85ea86db8380e3b-merged.mount: Deactivated successfully.
Nov 29 06:16:35 compute-0 podman[75666]: 2025-11-29 06:16:35.770449832 +0000 UTC m=+0.799579022 container remove 35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba (image=quay.io/ceph/ceph:v18, name=nostalgic_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:35 compute-0 systemd[1]: libpod-conmon-35d86aaa2001a400f0bdafddea47d26ba9b2e7b09541f0e33defe94f2c4a3eba.scope: Deactivated successfully.
Nov 29 06:16:35 compute-0 podman[75744]: 2025-11-29 06:16:35.848822706 +0000 UTC m=+0.048491651 container create b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 06:16:35 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2380306659' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 29 06:16:35 compute-0 ceph-mon[74654]: mgrmap e4: compute-0.vxabpq(active, since 6s)
Nov 29 06:16:35 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3397641018' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 06:16:35 compute-0 systemd[1]: Started libpod-conmon-b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73.scope.
Nov 29 06:16:35 compute-0 podman[75744]: 2025-11-29 06:16:35.827341349 +0000 UTC m=+0.027010274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52f745a80b1de5e4082a73bb90c0f33fce9bf44fe0d8a3b3f125f872d688093/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52f745a80b1de5e4082a73bb90c0f33fce9bf44fe0d8a3b3f125f872d688093/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52f745a80b1de5e4082a73bb90c0f33fce9bf44fe0d8a3b3f125f872d688093/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:36 compute-0 podman[75744]: 2025-11-29 06:16:36.006679696 +0000 UTC m=+0.206348601 container init b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:36 compute-0 podman[75744]: 2025-11-29 06:16:36.016799322 +0000 UTC m=+0.216468267 container start b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:36 compute-0 podman[75744]: 2025-11-29 06:16:36.020922209 +0000 UTC m=+0.220591154 container attach b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 06:16:36 compute-0 sshd-session[75651]: Invalid user in from 103.147.159.91 port 52226
Nov 29 06:16:36 compute-0 sshd-session[75651]: Received disconnect from 103.147.159.91 port 52226:11: Bye Bye [preauth]
Nov 29 06:16:36 compute-0 sshd-session[75651]: Disconnected from invalid user in 103.147.159.91 port 52226 [preauth]
Nov 29 06:16:37 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'crash'
Nov 29 06:16:37 compute-0 ceph-mgr[74948]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 06:16:37 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'dashboard'
Nov 29 06:16:37 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:37.853+0000 7f91542c8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 06:16:39 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'devicehealth'
Nov 29 06:16:39 compute-0 ceph-mgr[74948]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 06:16:39 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:39.522+0000 7f91542c8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 06:16:39 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 06:16:40 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 06:16:40 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 06:16:40 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:   from numpy import show_config as show_numpy_config
Nov 29 06:16:40 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:40.031+0000 7f91542c8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 06:16:40 compute-0 ceph-mgr[74948]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 06:16:40 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'influx'
Nov 29 06:16:40 compute-0 ceph-mgr[74948]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 06:16:40 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:40.268+0000 7f91542c8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 06:16:40 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'insights'
Nov 29 06:16:40 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'iostat'
Nov 29 06:16:40 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:40.742+0000 7f91542c8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 06:16:40 compute-0 ceph-mgr[74948]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 06:16:40 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'k8sevents'
Nov 29 06:16:42 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'localpool'
Nov 29 06:16:42 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 06:16:43 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'mirroring'
Nov 29 06:16:43 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'nfs'
Nov 29 06:16:44 compute-0 ceph-mgr[74948]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 06:16:44 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:44.320+0000 7f91542c8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 06:16:44 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'orchestrator'
Nov 29 06:16:44 compute-0 ceph-mgr[74948]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 06:16:44 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:44.990+0000 7f91542c8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 06:16:44 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 06:16:45 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:45.236+0000 7f91542c8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 06:16:45 compute-0 ceph-mgr[74948]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 06:16:45 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'osd_support'
Nov 29 06:16:45 compute-0 ceph-mgr[74948]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 06:16:45 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:45.448+0000 7f91542c8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 06:16:45 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 06:16:45 compute-0 ceph-mgr[74948]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 06:16:45 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:45.707+0000 7f91542c8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 06:16:45 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'progress'
Nov 29 06:16:45 compute-0 ceph-mgr[74948]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 06:16:45 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:45.926+0000 7f91542c8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 06:16:45 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'prometheus'
Nov 29 06:16:46 compute-0 ceph-mgr[74948]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 06:16:46 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:46.861+0000 7f91542c8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 06:16:46 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'rbd_support'
Nov 29 06:16:47 compute-0 ceph-mgr[74948]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 06:16:47 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:47.167+0000 7f91542c8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 06:16:47 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'restful'
Nov 29 06:16:47 compute-0 sshd-session[75795]: Invalid user support from 104.208.108.166 port 20282
Nov 29 06:16:47 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'rgw'
Nov 29 06:16:47 compute-0 sshd-session[75795]: Received disconnect from 104.208.108.166 port 20282:11: Bye Bye [preauth]
Nov 29 06:16:47 compute-0 sshd-session[75795]: Disconnected from invalid user support 104.208.108.166 port 20282 [preauth]
Nov 29 06:16:48 compute-0 ceph-mgr[74948]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 06:16:48 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'rook'
Nov 29 06:16:48 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:48.611+0000 7f91542c8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 06:16:50 compute-0 ceph-mgr[74948]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 06:16:50 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:50.683+0000 7f91542c8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 06:16:50 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'selftest'
Nov 29 06:16:50 compute-0 ceph-mgr[74948]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 06:16:50 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:50.915+0000 7f91542c8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 06:16:50 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'snap_schedule'
Nov 29 06:16:51 compute-0 ceph-mgr[74948]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 06:16:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:51.154+0000 7f91542c8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 06:16:51 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'stats'
Nov 29 06:16:51 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'status'
Nov 29 06:16:51 compute-0 ceph-mgr[74948]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 06:16:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:51.656+0000 7f91542c8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 06:16:51 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'telegraf'
Nov 29 06:16:51 compute-0 ceph-mgr[74948]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 06:16:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:51.894+0000 7f91542c8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 06:16:51 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'telemetry'
Nov 29 06:16:52 compute-0 ceph-mgr[74948]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 06:16:52 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:52.508+0000 7f91542c8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 06:16:52 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 06:16:53 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:53.181+0000 7f91542c8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 06:16:53 compute-0 ceph-mgr[74948]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 06:16:53 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'volumes'
Nov 29 06:16:53 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:53.907+0000 7f91542c8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 06:16:53 compute-0 ceph-mgr[74948]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 06:16:53 compute-0 ceph-mgr[74948]: mgr[py] Loading python module 'zabbix'
Nov 29 06:16:54 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:16:54.137+0000 7f91542c8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Active manager daemon compute-0.vxabpq restarted
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.vxabpq
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: ms_deliver_dispatch: unhandled message 0x5648794d0420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.vxabpq(active, starting, since 0.0144217s)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr handle_mgr_map Activating!
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr handle_mgr_map I am now activating
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.vxabpq", "id": "compute-0.vxabpq"} v 0) v1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr metadata", "who": "compute-0.vxabpq", "id": "compute-0.vxabpq"}]: dispatch
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: balancer
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Starting
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Manager daemon compute-0.vxabpq is now available
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:16:54
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [balancer INFO root] No pools available
Nov 29 06:16:54 compute-0 ceph-mon[74654]: Active manager daemon compute-0.vxabpq restarted
Nov 29 06:16:54 compute-0 ceph-mon[74654]: Activating manager daemon compute-0.vxabpq
Nov 29 06:16:54 compute-0 ceph-mon[74654]: osdmap e2: 0 total, 0 up, 0 in
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mgrmap e5: compute-0.vxabpq(active, starting, since 0.0144217s)
Nov 29 06:16:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 06:16:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr metadata", "who": "compute-0.vxabpq", "id": "compute-0.vxabpq"}]: dispatch
Nov 29 06:16:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 06:16:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 06:16:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 06:16:54 compute-0 ceph-mon[74654]: Manager daemon compute-0.vxabpq is now available
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: cephadm
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: crash
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: devicehealth
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: iostat
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [devicehealth INFO root] Starting
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: nfs
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: orchestrator
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: pg_autoscaler
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: progress
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [progress INFO root] Loading...
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [progress INFO root] No stored events to load
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [progress INFO root] Loaded [] historic events
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] recovery thread starting
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] starting setup
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: rbd_support
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: restful
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/mirror_snapshot_schedule"} v 0) v1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/mirror_snapshot_schedule"}]: dispatch
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: status
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] PerfHandler: starting
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TaskHandler: starting
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: telemetry
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [restful WARNING root] server not running: no certificate configured
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/trash_purge_schedule"} v 0) v1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/trash_purge_schedule"}]: dispatch
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] setup complete
Nov 29 06:16:54 compute-0 ceph-mgr[74948]: mgr load Constructed class from module: volumes
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 29 06:16:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:55 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.vxabpq(active, since 1.02504s)
Nov 29 06:16:55 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 29 06:16:55 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 29 06:16:55 compute-0 infallible_mahavira[75760]: {
Nov 29 06:16:55 compute-0 infallible_mahavira[75760]:     "mgrmap_epoch": 6,
Nov 29 06:16:55 compute-0 infallible_mahavira[75760]:     "initialized": true
Nov 29 06:16:55 compute-0 infallible_mahavira[75760]: }
Nov 29 06:16:55 compute-0 systemd[1]: libpod-b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73.scope: Deactivated successfully.
Nov 29 06:16:55 compute-0 conmon[75760]: conmon b3b4a9df478f449d160d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73.scope/container/memory.events
Nov 29 06:16:55 compute-0 podman[75744]: 2025-11-29 06:16:55.195836845 +0000 UTC m=+19.395505780 container died b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 06:16:55 compute-0 ceph-mon[74654]: Found migration_current of "None". Setting to last migration.
Nov 29 06:16:55 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:55 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:55 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 06:16:55 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 06:16:55 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/mirror_snapshot_schedule"}]: dispatch
Nov 29 06:16:55 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vxabpq/trash_purge_schedule"}]: dispatch
Nov 29 06:16:55 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:55 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:55 compute-0 ceph-mon[74654]: mgrmap e6: compute-0.vxabpq(active, since 1.02504s)
Nov 29 06:16:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e52f745a80b1de5e4082a73bb90c0f33fce9bf44fe0d8a3b3f125f872d688093-merged.mount: Deactivated successfully.
Nov 29 06:16:55 compute-0 podman[75744]: 2025-11-29 06:16:55.252803895 +0000 UTC m=+19.452472820 container remove b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 06:16:55 compute-0 systemd[1]: libpod-conmon-b3b4a9df478f449d160df1983659cfd9365d411f9112a902bf37bde390b1fa73.scope: Deactivated successfully.
Nov 29 06:16:55 compute-0 podman[75922]: 2025-11-29 06:16:55.339447903 +0000 UTC m=+0.055181490 container create 85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023 (image=quay.io/ceph/ceph:v18, name=priceless_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:16:55 compute-0 systemd[1]: Started libpod-conmon-85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023.scope.
Nov 29 06:16:55 compute-0 ceph-mgr[74948]: [cephadm INFO cherrypy.error] [29/Nov/2025:06:16:55] ENGINE Bus STARTING
Nov 29 06:16:55 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : [29/Nov/2025:06:16:55] ENGINE Bus STARTING
Nov 29 06:16:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901e1c1d63fca62ba9112816060a8643375a3a6d8c4fedb26b79f347ca36ba73/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901e1c1d63fca62ba9112816060a8643375a3a6d8c4fedb26b79f347ca36ba73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/901e1c1d63fca62ba9112816060a8643375a3a6d8c4fedb26b79f347ca36ba73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:55 compute-0 podman[75922]: 2025-11-29 06:16:55.320566329 +0000 UTC m=+0.036299956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:55 compute-0 podman[75922]: 2025-11-29 06:16:55.422394546 +0000 UTC m=+0.138128233 container init 85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023 (image=quay.io/ceph/ceph:v18, name=priceless_margulis, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:16:55 compute-0 podman[75922]: 2025-11-29 06:16:55.432170913 +0000 UTC m=+0.147904500 container start 85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023 (image=quay.io/ceph/ceph:v18, name=priceless_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:16:55 compute-0 podman[75922]: 2025-11-29 06:16:55.435929919 +0000 UTC m=+0.151663536 container attach 85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023 (image=quay.io/ceph/ceph:v18, name=priceless_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 06:16:55 compute-0 ceph-mgr[74948]: [cephadm INFO cherrypy.error] [29/Nov/2025:06:16:55] ENGINE Serving on http://192.168.122.100:8765
Nov 29 06:16:55 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : [29/Nov/2025:06:16:55] ENGINE Serving on http://192.168.122.100:8765
Nov 29 06:16:55 compute-0 ceph-mgr[74948]: [cephadm INFO cherrypy.error] [29/Nov/2025:06:16:55] ENGINE Serving on https://192.168.122.100:7150
Nov 29 06:16:55 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : [29/Nov/2025:06:16:55] ENGINE Serving on https://192.168.122.100:7150
Nov 29 06:16:55 compute-0 ceph-mgr[74948]: [cephadm INFO cherrypy.error] [29/Nov/2025:06:16:55] ENGINE Bus STARTED
Nov 29 06:16:55 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : [29/Nov/2025:06:16:55] ENGINE Bus STARTED
Nov 29 06:16:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 06:16:55 compute-0 ceph-mgr[74948]: [cephadm INFO cherrypy.error] [29/Nov/2025:06:16:55] ENGINE Client ('192.168.122.100', 59988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 06:16:55 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 06:16:55 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : [29/Nov/2025:06:16:55] ENGINE Client ('192.168.122.100', 59988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 06:16:55 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:16:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 29 06:16:55 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 06:16:55 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 06:16:55 compute-0 systemd[1]: libpod-85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023.scope: Deactivated successfully.
Nov 29 06:16:55 compute-0 podman[75922]: 2025-11-29 06:16:55.997808353 +0000 UTC m=+0.713541980 container died 85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023 (image=quay.io/ceph/ceph:v18, name=priceless_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-901e1c1d63fca62ba9112816060a8643375a3a6d8c4fedb26b79f347ca36ba73-merged.mount: Deactivated successfully.
Nov 29 06:16:56 compute-0 podman[75922]: 2025-11-29 06:16:56.044820332 +0000 UTC m=+0.760553919 container remove 85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023 (image=quay.io/ceph/ceph:v18, name=priceless_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:56 compute-0 systemd[1]: libpod-conmon-85b48ca7af23a7068f1df5a6ae1889d7e37d4455364c394ba4fc5b1b49f83023.scope: Deactivated successfully.
Nov 29 06:16:56 compute-0 podman[76001]: 2025-11-29 06:16:56.097606883 +0000 UTC m=+0.033135147 container create df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c (image=quay.io/ceph/ceph:v18, name=goofy_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 06:16:56 compute-0 systemd[1]: Started libpod-conmon-df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c.scope.
Nov 29 06:16:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eea2c5bdbd42279e57f44f781da5b026d9a1001fb6c82445af0dab5531054a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eea2c5bdbd42279e57f44f781da5b026d9a1001fb6c82445af0dab5531054a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6eea2c5bdbd42279e57f44f781da5b026d9a1001fb6c82445af0dab5531054a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:56 compute-0 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 06:16:56 compute-0 podman[76001]: 2025-11-29 06:16:56.171495511 +0000 UTC m=+0.107023805 container init df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c (image=quay.io/ceph/ceph:v18, name=goofy_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 06:16:56 compute-0 podman[76001]: 2025-11-29 06:16:56.08368822 +0000 UTC m=+0.019216504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:56 compute-0 podman[76001]: 2025-11-29 06:16:56.183562412 +0000 UTC m=+0.119090716 container start df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c (image=quay.io/ceph/ceph:v18, name=goofy_shaw, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:56 compute-0 podman[76001]: 2025-11-29 06:16:56.187693718 +0000 UTC m=+0.123222002 container attach df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c (image=quay.io/ceph/ceph:v18, name=goofy_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:16:56 compute-0 ceph-mon[74654]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 29 06:16:56 compute-0 ceph-mon[74654]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 29 06:16:56 compute-0 ceph-mon[74654]: [29/Nov/2025:06:16:55] ENGINE Bus STARTING
Nov 29 06:16:56 compute-0 ceph-mon[74654]: [29/Nov/2025:06:16:55] ENGINE Serving on http://192.168.122.100:8765
Nov 29 06:16:56 compute-0 ceph-mon[74654]: [29/Nov/2025:06:16:55] ENGINE Serving on https://192.168.122.100:7150
Nov 29 06:16:56 compute-0 ceph-mon[74654]: [29/Nov/2025:06:16:55] ENGINE Bus STARTED
Nov 29 06:16:56 compute-0 ceph-mon[74654]: [29/Nov/2025:06:16:55] ENGINE Client ('192.168.122.100', 59988) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 06:16:56 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 06:16:56 compute-0 ceph-mon[74654]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:16:56 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:56 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 06:16:56 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:16:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 29 06:16:56 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:56 compute-0 ceph-mgr[74948]: [cephadm INFO root] Set ssh ssh_user
Nov 29 06:16:56 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 29 06:16:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 29 06:16:56 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:56 compute-0 ceph-mgr[74948]: [cephadm INFO root] Set ssh ssh_config
Nov 29 06:16:56 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 29 06:16:56 compute-0 ceph-mgr[74948]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 29 06:16:56 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 29 06:16:56 compute-0 goofy_shaw[76018]: ssh user set to ceph-admin. sudo will be used
Nov 29 06:16:56 compute-0 systemd[1]: libpod-df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c.scope: Deactivated successfully.
Nov 29 06:16:56 compute-0 podman[76001]: 2025-11-29 06:16:56.739014025 +0000 UTC m=+0.674542299 container died df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c (image=quay.io/ceph/ceph:v18, name=goofy_shaw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 06:16:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6eea2c5bdbd42279e57f44f781da5b026d9a1001fb6c82445af0dab5531054a-merged.mount: Deactivated successfully.
Nov 29 06:16:56 compute-0 podman[76001]: 2025-11-29 06:16:56.783529703 +0000 UTC m=+0.719057967 container remove df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c (image=quay.io/ceph/ceph:v18, name=goofy_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 06:16:56 compute-0 systemd[1]: libpod-conmon-df05b91d7dbc67a5aa87d2095e0128adc0d17a018e09db22d44282f11b05427c.scope: Deactivated successfully.
Nov 29 06:16:56 compute-0 podman[76054]: 2025-11-29 06:16:56.862992418 +0000 UTC m=+0.053590735 container create 2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490 (image=quay.io/ceph/ceph:v18, name=hardcore_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 06:16:56 compute-0 systemd[1]: Started libpod-conmon-2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490.scope.
Nov 29 06:16:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734c5f4bd1d5f8cbed00994e410379acd4a5abc52711f26847ed3277ad3fd7c4/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734c5f4bd1d5f8cbed00994e410379acd4a5abc52711f26847ed3277ad3fd7c4/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734c5f4bd1d5f8cbed00994e410379acd4a5abc52711f26847ed3277ad3fd7c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734c5f4bd1d5f8cbed00994e410379acd4a5abc52711f26847ed3277ad3fd7c4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734c5f4bd1d5f8cbed00994e410379acd4a5abc52711f26847ed3277ad3fd7c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:56 compute-0 podman[76054]: 2025-11-29 06:16:56.84394273 +0000 UTC m=+0.034541097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:56 compute-0 podman[76054]: 2025-11-29 06:16:56.942711591 +0000 UTC m=+0.133309978 container init 2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490 (image=quay.io/ceph/ceph:v18, name=hardcore_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:16:56 compute-0 podman[76054]: 2025-11-29 06:16:56.953364292 +0000 UTC m=+0.143962609 container start 2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490 (image=quay.io/ceph/ceph:v18, name=hardcore_aryabhata, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 06:16:56 compute-0 podman[76054]: 2025-11-29 06:16:56.957052886 +0000 UTC m=+0.147651273 container attach 2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490 (image=quay.io/ceph/ceph:v18, name=hardcore_aryabhata, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 06:16:56 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.vxabpq(active, since 2s)
Nov 29 06:16:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019919563 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:16:57 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:16:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 29 06:16:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:57 compute-0 ceph-mgr[74948]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 29 06:16:57 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 29 06:16:57 compute-0 ceph-mgr[74948]: [cephadm INFO root] Set ssh private key
Nov 29 06:16:57 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 29 06:16:57 compute-0 systemd[1]: libpod-2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490.scope: Deactivated successfully.
Nov 29 06:16:57 compute-0 podman[76054]: 2025-11-29 06:16:57.492630498 +0000 UTC m=+0.683228845 container died 2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490 (image=quay.io/ceph/ceph:v18, name=hardcore_aryabhata, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-734c5f4bd1d5f8cbed00994e410379acd4a5abc52711f26847ed3277ad3fd7c4-merged.mount: Deactivated successfully.
Nov 29 06:16:57 compute-0 podman[76054]: 2025-11-29 06:16:57.617086955 +0000 UTC m=+0.807685282 container remove 2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490 (image=quay.io/ceph/ceph:v18, name=hardcore_aryabhata, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 06:16:57 compute-0 systemd[1]: libpod-conmon-2763117994d3bd24da35795c31b99aae7c4274a7ab807e22f4653032a3888490.scope: Deactivated successfully.
Nov 29 06:16:57 compute-0 ceph-mon[74654]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:16:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:57 compute-0 ceph-mon[74654]: Set ssh ssh_user
Nov 29 06:16:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:57 compute-0 ceph-mon[74654]: Set ssh ssh_config
Nov 29 06:16:57 compute-0 ceph-mon[74654]: ssh user set to ceph-admin. sudo will be used
Nov 29 06:16:57 compute-0 ceph-mon[74654]: mgrmap e7: compute-0.vxabpq(active, since 2s)
Nov 29 06:16:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:57 compute-0 podman[76111]: 2025-11-29 06:16:57.818122255 +0000 UTC m=+0.150823323 container create 9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8 (image=quay.io/ceph/ceph:v18, name=happy_buck, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:16:57 compute-0 systemd[1]: Started libpod-conmon-9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8.scope.
Nov 29 06:16:57 compute-0 podman[76111]: 2025-11-29 06:16:57.786692747 +0000 UTC m=+0.119393905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbec127e9c8d2c9be1b995fe8bf8fc31ad906ce7f581d9a421eed389819f1290/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbec127e9c8d2c9be1b995fe8bf8fc31ad906ce7f581d9a421eed389819f1290/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbec127e9c8d2c9be1b995fe8bf8fc31ad906ce7f581d9a421eed389819f1290/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbec127e9c8d2c9be1b995fe8bf8fc31ad906ce7f581d9a421eed389819f1290/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbec127e9c8d2c9be1b995fe8bf8fc31ad906ce7f581d9a421eed389819f1290/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:57 compute-0 podman[76111]: 2025-11-29 06:16:57.921982649 +0000 UTC m=+0.254683737 container init 9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8 (image=quay.io/ceph/ceph:v18, name=happy_buck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:57 compute-0 podman[76111]: 2025-11-29 06:16:57.932378363 +0000 UTC m=+0.265079461 container start 9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8 (image=quay.io/ceph/ceph:v18, name=happy_buck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:57 compute-0 podman[76111]: 2025-11-29 06:16:57.937679273 +0000 UTC m=+0.270380371 container attach 9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8 (image=quay.io/ceph/ceph:v18, name=happy_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 06:16:58 compute-0 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 06:16:58 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:16:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 29 06:16:58 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:58 compute-0 ceph-mgr[74948]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 29 06:16:58 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 29 06:16:58 compute-0 systemd[1]: libpod-9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8.scope: Deactivated successfully.
Nov 29 06:16:58 compute-0 podman[76111]: 2025-11-29 06:16:58.546278258 +0000 UTC m=+0.878979406 container died 9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8 (image=quay.io/ceph/ceph:v18, name=happy_buck, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 06:16:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbec127e9c8d2c9be1b995fe8bf8fc31ad906ce7f581d9a421eed389819f1290-merged.mount: Deactivated successfully.
Nov 29 06:16:58 compute-0 podman[76111]: 2025-11-29 06:16:58.605932944 +0000 UTC m=+0.938634042 container remove 9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8 (image=quay.io/ceph/ceph:v18, name=happy_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:58 compute-0 systemd[1]: libpod-conmon-9cb7ff071a0ac7275c01c024a4ee924e90b009397e5880c0d670d27b8cd96ff8.scope: Deactivated successfully.
Nov 29 06:16:58 compute-0 podman[76165]: 2025-11-29 06:16:58.67943091 +0000 UTC m=+0.051224568 container create e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d (image=quay.io/ceph/ceph:v18, name=suspicious_satoshi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:58 compute-0 systemd[1]: Started libpod-conmon-e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d.scope.
Nov 29 06:16:58 compute-0 podman[76165]: 2025-11-29 06:16:58.65111011 +0000 UTC m=+0.022903808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583d1a8dc69600fec0ed27aa9a9f7ca9cd05abbf3599a8622561d3b4fdcdcb63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583d1a8dc69600fec0ed27aa9a9f7ca9cd05abbf3599a8622561d3b4fdcdcb63/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583d1a8dc69600fec0ed27aa9a9f7ca9cd05abbf3599a8622561d3b4fdcdcb63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:58 compute-0 ceph-mon[74654]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:16:58 compute-0 ceph-mon[74654]: Set ssh ssh_identity_key
Nov 29 06:16:58 compute-0 ceph-mon[74654]: Set ssh private key
Nov 29 06:16:58 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:16:58 compute-0 podman[76165]: 2025-11-29 06:16:58.768344442 +0000 UTC m=+0.140138090 container init e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d (image=quay.io/ceph/ceph:v18, name=suspicious_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 06:16:58 compute-0 podman[76165]: 2025-11-29 06:16:58.775741651 +0000 UTC m=+0.147535309 container start e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d (image=quay.io/ceph/ceph:v18, name=suspicious_satoshi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 06:16:58 compute-0 podman[76165]: 2025-11-29 06:16:58.779119397 +0000 UTC m=+0.150913025 container attach e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d (image=quay.io/ceph/ceph:v18, name=suspicious_satoshi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:16:59 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:16:59 compute-0 suspicious_satoshi[76181]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCu3bC5CnVOgMysWq+5APHTlM4aYzGGPRuaV3Iz9UGDOn9EfOSq89ba1ZYrKrCZUB2ran/6pGsjYG31TA1fEj7PJxj/KMHUXZPzA2OnYhngrow0DJlXLpAZyXEwCWnSXvNoXJgb+Ud550Hwu3I6cIXLfNiV0PeJy/vqOcH6IW0WeciHm6OCzzqtJz1SMRN/s41/Nlg8V/IqDT9xPkxz1bW1KAPpe1jOvvKpmdePRsd8IecvcTFX0ywbbVem+dv1+PDlXrXvNoyjA2zfibRBbkB6Gw2SWYp2G9Qsbf7kC0gEGWMwu2/vZAmvK/6aqb/D0r9z7hBfCzNCJFRrXW5bgxPGJN8q6pAKG3Bl/lDCya3x1lb50Tzraucim153k+46ML+IQYfoWFY17Xaa/tIYvaveLDDhXojDehUqhh8JYX/vkDMT/QnViiDNmskGirYuZG8steVIDcpvNGVStwn1Hb4XyPDP5/mSaD1oHMM5wZNHnZJG8WxJmyooKqNxOZDjnB0= zuul@controller
Nov 29 06:16:59 compute-0 systemd[1]: libpod-e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d.scope: Deactivated successfully.
Nov 29 06:16:59 compute-0 podman[76165]: 2025-11-29 06:16:59.306988671 +0000 UTC m=+0.678782299 container died e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d (image=quay.io/ceph/ceph:v18, name=suspicious_satoshi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 06:16:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-583d1a8dc69600fec0ed27aa9a9f7ca9cd05abbf3599a8622561d3b4fdcdcb63-merged.mount: Deactivated successfully.
Nov 29 06:16:59 compute-0 podman[76165]: 2025-11-29 06:16:59.359494525 +0000 UTC m=+0.731288183 container remove e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d (image=quay.io/ceph/ceph:v18, name=suspicious_satoshi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 06:16:59 compute-0 systemd[1]: libpod-conmon-e1ef19578360b6b69ef1ba9d563050aadd0834d2807ef1f3834028e59465c26d.scope: Deactivated successfully.
Nov 29 06:16:59 compute-0 podman[76218]: 2025-11-29 06:16:59.434640208 +0000 UTC m=+0.048568193 container create 8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2 (image=quay.io/ceph/ceph:v18, name=heuristic_ramanujan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:16:59 compute-0 systemd[1]: Started libpod-conmon-8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2.scope.
Nov 29 06:16:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:16:59 compute-0 podman[76218]: 2025-11-29 06:16:59.412308447 +0000 UTC m=+0.026236482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2698566c29a9ad27f5f272b802476923100f2d5e9a6656d25c229e1bf051a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2698566c29a9ad27f5f272b802476923100f2d5e9a6656d25c229e1bf051a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2698566c29a9ad27f5f272b802476923100f2d5e9a6656d25c229e1bf051a5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:16:59 compute-0 podman[76218]: 2025-11-29 06:16:59.534299644 +0000 UTC m=+0.148227609 container init 8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2 (image=quay.io/ceph/ceph:v18, name=heuristic_ramanujan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:16:59 compute-0 podman[76218]: 2025-11-29 06:16:59.543467602 +0000 UTC m=+0.157395547 container start 8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2 (image=quay.io/ceph/ceph:v18, name=heuristic_ramanujan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 06:16:59 compute-0 podman[76218]: 2025-11-29 06:16:59.549329448 +0000 UTC m=+0.163257593 container attach 8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2 (image=quay.io/ceph/ceph:v18, name=heuristic_ramanujan, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 06:16:59 compute-0 ceph-mon[74654]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:16:59 compute-0 ceph-mon[74654]: Set ssh ssh_identity_pub
Nov 29 06:17:00 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:00 compute-0 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 06:17:00 compute-0 sshd-session[76263]: Accepted publickey for ceph-admin from 192.168.122.100 port 39850 ssh2: RSA SHA256:wSO38gUigzg+3qmbq5ZCXhMSnm1ow+14BbAXfOugcIA
Nov 29 06:17:00 compute-0 systemd-logind[797]: New session 21 of user ceph-admin.
Nov 29 06:17:00 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 06:17:00 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 06:17:00 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 06:17:00 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 29 06:17:00 compute-0 systemd[76267]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:17:00 compute-0 sshd-session[76274]: Accepted publickey for ceph-admin from 192.168.122.100 port 39852 ssh2: RSA SHA256:wSO38gUigzg+3qmbq5ZCXhMSnm1ow+14BbAXfOugcIA
Nov 29 06:17:00 compute-0 systemd-logind[797]: New session 23 of user ceph-admin.
Nov 29 06:17:00 compute-0 systemd[76267]: Queued start job for default target Main User Target.
Nov 29 06:17:00 compute-0 systemd[76267]: Created slice User Application Slice.
Nov 29 06:17:00 compute-0 systemd[76267]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 06:17:00 compute-0 systemd[76267]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 06:17:00 compute-0 systemd[76267]: Reached target Paths.
Nov 29 06:17:00 compute-0 systemd[76267]: Reached target Timers.
Nov 29 06:17:00 compute-0 systemd[76267]: Starting D-Bus User Message Bus Socket...
Nov 29 06:17:00 compute-0 systemd[76267]: Starting Create User's Volatile Files and Directories...
Nov 29 06:17:00 compute-0 systemd[76267]: Finished Create User's Volatile Files and Directories.
Nov 29 06:17:00 compute-0 systemd[76267]: Listening on D-Bus User Message Bus Socket.
Nov 29 06:17:00 compute-0 systemd[76267]: Reached target Sockets.
Nov 29 06:17:00 compute-0 systemd[76267]: Reached target Basic System.
Nov 29 06:17:00 compute-0 systemd[76267]: Reached target Main User Target.
Nov 29 06:17:00 compute-0 systemd[76267]: Startup finished in 152ms.
Nov 29 06:17:00 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 29 06:17:00 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Nov 29 06:17:00 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Nov 29 06:17:00 compute-0 sshd-session[76263]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:17:00 compute-0 sshd-session[76274]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:17:00 compute-0 sshd-session[76259]: Received disconnect from 79.116.35.29 port 52338:11: Bye Bye [preauth]
Nov 29 06:17:00 compute-0 sshd-session[76259]: Disconnected from authenticating user root 79.116.35.29 port 52338 [preauth]
Nov 29 06:17:00 compute-0 sudo[76287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:00 compute-0 sudo[76287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:00 compute-0 sudo[76287]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:00 compute-0 ceph-mon[74654]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:00 compute-0 sudo[76312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:17:00 compute-0 sudo[76312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:00 compute-0 sudo[76312]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:00 compute-0 sshd-session[76337]: Accepted publickey for ceph-admin from 192.168.122.100 port 39864 ssh2: RSA SHA256:wSO38gUigzg+3qmbq5ZCXhMSnm1ow+14BbAXfOugcIA
Nov 29 06:17:01 compute-0 systemd-logind[797]: New session 24 of user ceph-admin.
Nov 29 06:17:01 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Nov 29 06:17:01 compute-0 sshd-session[76337]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:17:01 compute-0 sudo[76341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:01 compute-0 sudo[76341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:01 compute-0 sudo[76341]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:01 compute-0 sudo[76366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 29 06:17:01 compute-0 sudo[76366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:01 compute-0 sudo[76366]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:01 compute-0 sshd-session[76391]: Accepted publickey for ceph-admin from 192.168.122.100 port 39874 ssh2: RSA SHA256:wSO38gUigzg+3qmbq5ZCXhMSnm1ow+14BbAXfOugcIA
Nov 29 06:17:01 compute-0 systemd-logind[797]: New session 25 of user ceph-admin.
Nov 29 06:17:01 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 29 06:17:01 compute-0 sshd-session[76391]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:17:01 compute-0 sudo[76395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:01 compute-0 sudo[76395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:01 compute-0 sudo[76395]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:01 compute-0 sudo[76420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 29 06:17:01 compute-0 sudo[76420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:01 compute-0 sudo[76420]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:01 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 29 06:17:01 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 29 06:17:01 compute-0 sshd-session[76445]: Accepted publickey for ceph-admin from 192.168.122.100 port 39880 ssh2: RSA SHA256:wSO38gUigzg+3qmbq5ZCXhMSnm1ow+14BbAXfOugcIA
Nov 29 06:17:01 compute-0 systemd-logind[797]: New session 26 of user ceph-admin.
Nov 29 06:17:01 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Nov 29 06:17:01 compute-0 sshd-session[76445]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:17:01 compute-0 ceph-mon[74654]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:01 compute-0 sudo[76449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:01 compute-0 sudo[76449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:01 compute-0 sudo[76449]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:02 compute-0 sudo[76474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:17:02 compute-0 sudo[76474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:02 compute-0 sudo[76474]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052984 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:17:02 compute-0 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 06:17:02 compute-0 sshd-session[76499]: Accepted publickey for ceph-admin from 192.168.122.100 port 39888 ssh2: RSA SHA256:wSO38gUigzg+3qmbq5ZCXhMSnm1ow+14BbAXfOugcIA
Nov 29 06:17:02 compute-0 systemd-logind[797]: New session 27 of user ceph-admin.
Nov 29 06:17:02 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 29 06:17:02 compute-0 sshd-session[76499]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:17:02 compute-0 sudo[76503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:02 compute-0 sudo[76503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:02 compute-0 sudo[76503]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:02 compute-0 sudo[76528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:17:02 compute-0 sudo[76528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:02 compute-0 sudo[76528]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:02 compute-0 sshd-session[76553]: Accepted publickey for ceph-admin from 192.168.122.100 port 48932 ssh2: RSA SHA256:wSO38gUigzg+3qmbq5ZCXhMSnm1ow+14BbAXfOugcIA
Nov 29 06:17:02 compute-0 systemd-logind[797]: New session 28 of user ceph-admin.
Nov 29 06:17:02 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Nov 29 06:17:02 compute-0 sshd-session[76553]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:17:02 compute-0 sudo[76557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:02 compute-0 sudo[76557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:02 compute-0 sudo[76557]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:02 compute-0 sudo[76582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 29 06:17:02 compute-0 sudo[76582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:02 compute-0 sudo[76582]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:02 compute-0 ceph-mon[74654]: Deploying cephadm binary to compute-0
Nov 29 06:17:03 compute-0 sshd-session[76607]: Accepted publickey for ceph-admin from 192.168.122.100 port 48944 ssh2: RSA SHA256:wSO38gUigzg+3qmbq5ZCXhMSnm1ow+14BbAXfOugcIA
Nov 29 06:17:03 compute-0 systemd-logind[797]: New session 29 of user ceph-admin.
Nov 29 06:17:03 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 29 06:17:03 compute-0 sshd-session[76607]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:17:03 compute-0 sudo[76611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:03 compute-0 sudo[76611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:03 compute-0 sudo[76611]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:03 compute-0 sudo[76636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:17:03 compute-0 sudo[76636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:03 compute-0 sudo[76636]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:03 compute-0 sshd-session[76661]: Accepted publickey for ceph-admin from 192.168.122.100 port 48950 ssh2: RSA SHA256:wSO38gUigzg+3qmbq5ZCXhMSnm1ow+14BbAXfOugcIA
Nov 29 06:17:03 compute-0 systemd-logind[797]: New session 30 of user ceph-admin.
Nov 29 06:17:03 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 29 06:17:03 compute-0 sshd-session[76661]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:17:03 compute-0 sudo[76665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:03 compute-0 sudo[76665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:03 compute-0 sudo[76665]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:03 compute-0 sudo[76690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 29 06:17:03 compute-0 sudo[76690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:03 compute-0 sudo[76690]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:03 compute-0 sshd-session[76715]: Accepted publickey for ceph-admin from 192.168.122.100 port 48960 ssh2: RSA SHA256:wSO38gUigzg+3qmbq5ZCXhMSnm1ow+14BbAXfOugcIA
Nov 29 06:17:03 compute-0 systemd-logind[797]: New session 31 of user ceph-admin.
Nov 29 06:17:03 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 29 06:17:03 compute-0 sshd-session[76715]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:17:04 compute-0 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 06:17:04 compute-0 sshd-session[76742]: Accepted publickey for ceph-admin from 192.168.122.100 port 48964 ssh2: RSA SHA256:wSO38gUigzg+3qmbq5ZCXhMSnm1ow+14BbAXfOugcIA
Nov 29 06:17:04 compute-0 systemd-logind[797]: New session 32 of user ceph-admin.
Nov 29 06:17:04 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 29 06:17:04 compute-0 sshd-session[76742]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:17:04 compute-0 sudo[76746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:04 compute-0 sudo[76746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:04 compute-0 sudo[76746]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:04 compute-0 sudo[76771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 29 06:17:04 compute-0 sudo[76771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:04 compute-0 sudo[76771]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:04 compute-0 sshd-session[76796]: Accepted publickey for ceph-admin from 192.168.122.100 port 48966 ssh2: RSA SHA256:wSO38gUigzg+3qmbq5ZCXhMSnm1ow+14BbAXfOugcIA
Nov 29 06:17:04 compute-0 systemd-logind[797]: New session 33 of user ceph-admin.
Nov 29 06:17:04 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Nov 29 06:17:04 compute-0 sshd-session[76796]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 06:17:05 compute-0 sudo[76800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:05 compute-0 sudo[76800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:05 compute-0 sudo[76800]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:05 compute-0 sudo[76825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 29 06:17:05 compute-0 sudo[76825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:05 compute-0 sudo[76825]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 06:17:05 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:05 compute-0 ceph-mgr[74948]: [cephadm INFO root] Added host compute-0
Nov 29 06:17:05 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 06:17:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 06:17:05 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 06:17:05 compute-0 heuristic_ramanujan[76234]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 06:17:05 compute-0 systemd[1]: libpod-8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2.scope: Deactivated successfully.
Nov 29 06:17:05 compute-0 podman[76218]: 2025-11-29 06:17:05.520789305 +0000 UTC m=+6.134717260 container died 8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2 (image=quay.io/ceph/ceph:v18, name=heuristic_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:17:05 compute-0 sudo[76871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:05 compute-0 sudo[76871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:05 compute-0 sudo[76871]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:05 compute-0 sudo[76909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:17:05 compute-0 sudo[76909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:05 compute-0 sudo[76909]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:05 compute-0 sudo[76934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:05 compute-0 sudo[76934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:05 compute-0 sudo[76934]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab2698566c29a9ad27f5f272b802476923100f2d5e9a6656d25c229e1bf051a5-merged.mount: Deactivated successfully.
Nov 29 06:17:05 compute-0 podman[76218]: 2025-11-29 06:17:05.720422946 +0000 UTC m=+6.334350921 container remove 8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2 (image=quay.io/ceph/ceph:v18, name=heuristic_ramanujan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 06:17:05 compute-0 systemd[1]: libpod-conmon-8ebdd1ded4b48c391514ed886d17f784e8ef8f23fcc7e7e018653525d59d8cd2.scope: Deactivated successfully.
Nov 29 06:17:05 compute-0 sudo[76960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Nov 29 06:17:05 compute-0 sudo[76960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:05 compute-0 podman[76983]: 2025-11-29 06:17:05.772911719 +0000 UTC m=+0.034825225 container create 553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4 (image=quay.io/ceph/ceph:v18, name=relaxed_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 06:17:05 compute-0 systemd[1]: Started libpod-conmon-553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4.scope.
Nov 29 06:17:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279bc1901eb31510ccc5d1add33805af8f4d776f9555c0e00d84d21c272d7565/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279bc1901eb31510ccc5d1add33805af8f4d776f9555c0e00d84d21c272d7565/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279bc1901eb31510ccc5d1add33805af8f4d776f9555c0e00d84d21c272d7565/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:05 compute-0 podman[76983]: 2025-11-29 06:17:05.842274149 +0000 UTC m=+0.104187685 container init 553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4 (image=quay.io/ceph/ceph:v18, name=relaxed_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 06:17:05 compute-0 podman[76983]: 2025-11-29 06:17:05.847895747 +0000 UTC m=+0.109809263 container start 553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4 (image=quay.io/ceph/ceph:v18, name=relaxed_bose, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:17:05 compute-0 podman[76983]: 2025-11-29 06:17:05.851912601 +0000 UTC m=+0.113826147 container attach 553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4 (image=quay.io/ceph/ceph:v18, name=relaxed_bose, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 06:17:05 compute-0 podman[76983]: 2025-11-29 06:17:05.758685187 +0000 UTC m=+0.020598733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:05 compute-0 podman[77032]: 2025-11-29 06:17:05.981667867 +0000 UTC m=+0.040303650 container create ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8 (image=quay.io/ceph/ceph:v18, name=funny_panini, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:17:06 compute-0 systemd[1]: Started libpod-conmon-ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8.scope.
Nov 29 06:17:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:06 compute-0 podman[77032]: 2025-11-29 06:17:06.043905425 +0000 UTC m=+0.102541228 container init ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8 (image=quay.io/ceph/ceph:v18, name=funny_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:17:06 compute-0 podman[77032]: 2025-11-29 06:17:06.048557487 +0000 UTC m=+0.107193270 container start ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8 (image=quay.io/ceph/ceph:v18, name=funny_panini, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 06:17:06 compute-0 podman[77032]: 2025-11-29 06:17:06.055959596 +0000 UTC m=+0.114595409 container attach ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8 (image=quay.io/ceph/ceph:v18, name=funny_panini, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:17:06 compute-0 podman[77032]: 2025-11-29 06:17:05.964702508 +0000 UTC m=+0.023338311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:06 compute-0 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 06:17:06 compute-0 funny_panini[77049]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 29 06:17:06 compute-0 systemd[1]: libpod-ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8.scope: Deactivated successfully.
Nov 29 06:17:06 compute-0 podman[77032]: 2025-11-29 06:17:06.320579893 +0000 UTC m=+0.379215676 container died ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8 (image=quay.io/ceph/ceph:v18, name=funny_panini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:17:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f4b80ac5c0a549d5657d25485b578c474ef3894f522dfa187adfb6d294cf5e3-merged.mount: Deactivated successfully.
Nov 29 06:17:06 compute-0 podman[77032]: 2025-11-29 06:17:06.364341379 +0000 UTC m=+0.422977162 container remove ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8 (image=quay.io/ceph/ceph:v18, name=funny_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:17:06 compute-0 systemd[1]: libpod-conmon-ddfda85c379129595370e272977267244290c91d9e3c4f22c4eb06fa11f84dd8.scope: Deactivated successfully.
Nov 29 06:17:06 compute-0 sudo[76960]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 29 06:17:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:06 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:06 compute-0 ceph-mgr[74948]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 29 06:17:06 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 29 06:17:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 06:17:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:06 compute-0 relaxed_bose[77001]: Scheduled mon update...
Nov 29 06:17:06 compute-0 systemd[1]: libpod-553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4.scope: Deactivated successfully.
Nov 29 06:17:06 compute-0 podman[76983]: 2025-11-29 06:17:06.475737977 +0000 UTC m=+0.737651523 container died 553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4 (image=quay.io/ceph/ceph:v18, name=relaxed_bose, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:17:06 compute-0 sudo[77085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:06 compute-0 sudo[77085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:06 compute-0 sudo[77085]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:06 compute-0 sudo[77119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:17:06 compute-0 sudo[77119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:06 compute-0 sudo[77119]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:06 compute-0 sudo[77145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:06 compute-0 sudo[77145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:06 compute-0 sudo[77145]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:06 compute-0 sudo[77170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 06:17:06 compute-0 sudo[77170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:06 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:06 compute-0 ceph-mon[74654]: Added host compute-0
Nov 29 06:17:06 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 06:17:06 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:06 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:17:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-279bc1901eb31510ccc5d1add33805af8f4d776f9555c0e00d84d21c272d7565-merged.mount: Deactivated successfully.
Nov 29 06:17:07 compute-0 podman[76983]: 2025-11-29 06:17:07.401268766 +0000 UTC m=+1.663182332 container remove 553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4 (image=quay.io/ceph/ceph:v18, name=relaxed_bose, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 06:17:07 compute-0 systemd[1]: libpod-conmon-553ec7b71fef2e274c801eef1a0ab25c12ada7226317694b9c80f8441335dbb4.scope: Deactivated successfully.
Nov 29 06:17:07 compute-0 podman[77208]: 2025-11-29 06:17:07.477302084 +0000 UTC m=+0.050895718 container create af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49 (image=quay.io/ceph/ceph:v18, name=mystifying_tu, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:17:07 compute-0 sudo[77170]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:17:07 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:07 compute-0 systemd[1]: Started libpod-conmon-af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49.scope.
Nov 29 06:17:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/392b9e9b7c18b7c199db81beaca994e1a08743926ff62cceb228bba20cb6a679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/392b9e9b7c18b7c199db81beaca994e1a08743926ff62cceb228bba20cb6a679/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/392b9e9b7c18b7c199db81beaca994e1a08743926ff62cceb228bba20cb6a679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:07 compute-0 sudo[77232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:07 compute-0 sudo[77232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:07 compute-0 podman[77208]: 2025-11-29 06:17:07.455701434 +0000 UTC m=+0.029295078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:07 compute-0 sudo[77232]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:07 compute-0 podman[77208]: 2025-11-29 06:17:07.557775438 +0000 UTC m=+0.131369092 container init af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49 (image=quay.io/ceph/ceph:v18, name=mystifying_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:17:07 compute-0 podman[77208]: 2025-11-29 06:17:07.564123157 +0000 UTC m=+0.137716801 container start af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49 (image=quay.io/ceph/ceph:v18, name=mystifying_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 06:17:07 compute-0 podman[77208]: 2025-11-29 06:17:07.567275886 +0000 UTC m=+0.140869510 container attach af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49 (image=quay.io/ceph/ceph:v18, name=mystifying_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 29 06:17:07 compute-0 sudo[77261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:17:07 compute-0 sudo[77261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:07 compute-0 sudo[77261]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:07 compute-0 sudo[77287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:07 compute-0 sudo[77287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:07 compute-0 sudo[77287]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:07 compute-0 sudo[77312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 06:17:07 compute-0 sudo[77312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:07 compute-0 ceph-mon[74654]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:07 compute-0 ceph-mon[74654]: Saving service mon spec with placement count:5
Nov 29 06:17:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:08 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:08 compute-0 ceph-mgr[74948]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 29 06:17:08 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 29 06:17:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 06:17:08 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:08 compute-0 mystifying_tu[77238]: Scheduled mgr update...
Nov 29 06:17:08 compute-0 systemd[1]: libpod-af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49.scope: Deactivated successfully.
Nov 29 06:17:08 compute-0 podman[77208]: 2025-11-29 06:17:08.110788493 +0000 UTC m=+0.684382127 container died af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49 (image=quay.io/ceph/ceph:v18, name=mystifying_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:17:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-392b9e9b7c18b7c199db81beaca994e1a08743926ff62cceb228bba20cb6a679-merged.mount: Deactivated successfully.
Nov 29 06:17:08 compute-0 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 06:17:08 compute-0 podman[77426]: 2025-11-29 06:17:08.175634255 +0000 UTC m=+0.072346135 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:17:08 compute-0 podman[77208]: 2025-11-29 06:17:08.186389019 +0000 UTC m=+0.759982643 container remove af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49 (image=quay.io/ceph/ceph:v18, name=mystifying_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:17:08 compute-0 systemd[1]: libpod-conmon-af222e2306a1917f524e2f6d77fa9a65bfc61537f8044f221df98c8f93ab3c49.scope: Deactivated successfully.
Nov 29 06:17:08 compute-0 podman[77457]: 2025-11-29 06:17:08.264346931 +0000 UTC m=+0.054009397 container create d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0 (image=quay.io/ceph/ceph:v18, name=heuristic_shirley, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 06:17:08 compute-0 systemd[1]: Started libpod-conmon-d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0.scope.
Nov 29 06:17:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:08 compute-0 podman[77457]: 2025-11-29 06:17:08.239276903 +0000 UTC m=+0.028939469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323d29befc8c895075fe31d014387a0f8fca64fd32da3edc8ad45df019546b7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323d29befc8c895075fe31d014387a0f8fca64fd32da3edc8ad45df019546b7e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323d29befc8c895075fe31d014387a0f8fca64fd32da3edc8ad45df019546b7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:08 compute-0 podman[77457]: 2025-11-29 06:17:08.935172755 +0000 UTC m=+0.724835241 container init d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0 (image=quay.io/ceph/ceph:v18, name=heuristic_shirley, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 06:17:08 compute-0 podman[77457]: 2025-11-29 06:17:08.943235123 +0000 UTC m=+0.732897599 container start d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0 (image=quay.io/ceph/ceph:v18, name=heuristic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 06:17:09 compute-0 podman[77457]: 2025-11-29 06:17:09.039791541 +0000 UTC m=+0.829454057 container attach d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0 (image=quay.io/ceph/ceph:v18, name=heuristic_shirley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:17:09 compute-0 ceph-mon[74654]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:09 compute-0 ceph-mon[74654]: Saving service mgr spec with placement count:2
Nov 29 06:17:09 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:09 compute-0 podman[77426]: 2025-11-29 06:17:09.098127399 +0000 UTC m=+0.994839279 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 06:17:09 compute-0 sudo[77312]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:17:09 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:09 compute-0 sudo[77527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:09 compute-0 sudo[77527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:09 compute-0 sudo[77527]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:09 compute-0 sudo[77552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:17:09 compute-0 sudo[77552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:09 compute-0 sudo[77552]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:09 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:09 compute-0 ceph-mgr[74948]: [cephadm INFO root] Saving service crash spec with placement *
Nov 29 06:17:09 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 29 06:17:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 06:17:09 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:09 compute-0 heuristic_shirley[77473]: Scheduled crash update...
Nov 29 06:17:09 compute-0 sudo[77577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:09 compute-0 sudo[77577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:09 compute-0 sudo[77577]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:09 compute-0 systemd[1]: libpod-d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0.scope: Deactivated successfully.
Nov 29 06:17:09 compute-0 podman[77457]: 2025-11-29 06:17:09.566929165 +0000 UTC m=+1.356591671 container died d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0 (image=quay.io/ceph/ceph:v18, name=heuristic_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:17:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-323d29befc8c895075fe31d014387a0f8fca64fd32da3edc8ad45df019546b7e-merged.mount: Deactivated successfully.
Nov 29 06:17:09 compute-0 podman[77457]: 2025-11-29 06:17:09.618556163 +0000 UTC m=+1.408218629 container remove d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0 (image=quay.io/ceph/ceph:v18, name=heuristic_shirley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:17:09 compute-0 systemd[1]: libpod-conmon-d09b89a526a74a3b64e3a7e3eb8cc0a60e2a4482feff6465867912d76dbd31f0.scope: Deactivated successfully.
Nov 29 06:17:09 compute-0 sudo[77605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:17:09 compute-0 sudo[77605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:09 compute-0 podman[77640]: 2025-11-29 06:17:09.743638107 +0000 UTC m=+0.108605189 container create d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056 (image=quay.io/ceph/ceph:v18, name=gracious_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 06:17:09 compute-0 podman[77640]: 2025-11-29 06:17:09.654712905 +0000 UTC m=+0.019680007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:09 compute-0 systemd[1]: Started libpod-conmon-d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056.scope.
Nov 29 06:17:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0fa0258cf49b633df2f762b8fe2068488244ad08f9d66a3892c879c746ec83/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0fa0258cf49b633df2f762b8fe2068488244ad08f9d66a3892c879c746ec83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0fa0258cf49b633df2f762b8fe2068488244ad08f9d66a3892c879c746ec83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:10 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77674 (sysctl)
Nov 29 06:17:10 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 29 06:17:10 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 29 06:17:10 compute-0 podman[77640]: 2025-11-29 06:17:10.12030881 +0000 UTC m=+0.485275972 container init d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056 (image=quay.io/ceph/ceph:v18, name=gracious_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:17:10 compute-0 podman[77640]: 2025-11-29 06:17:10.132111953 +0000 UTC m=+0.497079075 container start d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056 (image=quay.io/ceph/ceph:v18, name=gracious_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 06:17:10 compute-0 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 06:17:10 compute-0 podman[77640]: 2025-11-29 06:17:10.277899362 +0000 UTC m=+0.642866464 container attach d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056 (image=quay.io/ceph/ceph:v18, name=gracious_blackwell, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 06:17:10 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:10 compute-0 ceph-mon[74654]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:10 compute-0 ceph-mon[74654]: Saving service crash spec with placement *
Nov 29 06:17:10 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:10 compute-0 sudo[77605]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:10 compute-0 sudo[77707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:10 compute-0 sudo[77707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:10 compute-0 sudo[77707]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:10 compute-0 sudo[77742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:17:10 compute-0 sudo[77742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:10 compute-0 sudo[77742]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:10 compute-0 sudo[77767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:10 compute-0 sudo[77767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:10 compute-0 sudo[77767]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 29 06:17:10 compute-0 sudo[77792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 29 06:17:10 compute-0 sudo[77792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3953414744' entity='client.admin' 
Nov 29 06:17:10 compute-0 sudo[77792]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:17:10 compute-0 systemd[1]: libpod-d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056.scope: Deactivated successfully.
Nov 29 06:17:10 compute-0 podman[77640]: 2025-11-29 06:17:10.941393048 +0000 UTC m=+1.306360150 container died d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056 (image=quay.io/ceph/ceph:v18, name=gracious_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:17:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c0fa0258cf49b633df2f762b8fe2068488244ad08f9d66a3892c879c746ec83-merged.mount: Deactivated successfully.
Nov 29 06:17:11 compute-0 podman[77640]: 2025-11-29 06:17:11.108211121 +0000 UTC m=+1.473178233 container remove d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056 (image=quay.io/ceph/ceph:v18, name=gracious_blackwell, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 06:17:11 compute-0 sudo[77847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:11 compute-0 systemd[1]: libpod-conmon-d095724c2436c6d176dc356cd11a8593884650f6c094a31cefc972f5b7cc2056.scope: Deactivated successfully.
Nov 29 06:17:11 compute-0 sudo[77847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:11 compute-0 sudo[77847]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:11 compute-0 podman[77871]: 2025-11-29 06:17:11.18532334 +0000 UTC m=+0.047819802 container create 221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c (image=quay.io/ceph/ceph:v18, name=wonderful_wu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:17:11 compute-0 systemd[1]: Started libpod-conmon-221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c.scope.
Nov 29 06:17:11 compute-0 sudo[77880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:17:11 compute-0 sudo[77880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:11 compute-0 sudo[77880]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7618681bf68a8efdd5790696c30fa2e80a7304f04ee403c77db50097598818cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7618681bf68a8efdd5790696c30fa2e80a7304f04ee403c77db50097598818cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7618681bf68a8efdd5790696c30fa2e80a7304f04ee403c77db50097598818cf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:11 compute-0 podman[77871]: 2025-11-29 06:17:11.164948954 +0000 UTC m=+0.027445476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:11 compute-0 podman[77871]: 2025-11-29 06:17:11.278200134 +0000 UTC m=+0.140696636 container init 221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c (image=quay.io/ceph/ceph:v18, name=wonderful_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:17:11 compute-0 podman[77871]: 2025-11-29 06:17:11.288260278 +0000 UTC m=+0.150756740 container start 221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c (image=quay.io/ceph/ceph:v18, name=wonderful_wu, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 06:17:11 compute-0 podman[77871]: 2025-11-29 06:17:11.291535121 +0000 UTC m=+0.154031743 container attach 221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c (image=quay.io/ceph/ceph:v18, name=wonderful_wu, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 06:17:11 compute-0 sudo[77918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:11 compute-0 sudo[77918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:11 compute-0 sudo[77918]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:11 compute-0 sudo[77945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- inventory --format=json-pretty --filter-for-batch
Nov 29 06:17:11 compute-0 sudo[77945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:11 compute-0 podman[78029]: 2025-11-29 06:17:11.73003563 +0000 UTC m=+0.060521291 container create c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 06:17:11 compute-0 systemd[1]: Started libpod-conmon-c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270.scope.
Nov 29 06:17:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:11 compute-0 podman[78029]: 2025-11-29 06:17:11.705031244 +0000 UTC m=+0.035516915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:17:11 compute-0 podman[78029]: 2025-11-29 06:17:11.805617866 +0000 UTC m=+0.136103517 container init c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:17:11 compute-0 podman[78029]: 2025-11-29 06:17:11.815190306 +0000 UTC m=+0.145675977 container start c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 29 06:17:11 compute-0 crazy_zhukovsky[78046]: 167 167
Nov 29 06:17:11 compute-0 systemd[1]: libpod-c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270.scope: Deactivated successfully.
Nov 29 06:17:11 compute-0 podman[78029]: 2025-11-29 06:17:11.821412742 +0000 UTC m=+0.151898413 container attach c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:17:11 compute-0 podman[78029]: 2025-11-29 06:17:11.822315988 +0000 UTC m=+0.152801659 container died c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:17:11 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 29 06:17:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e0f9cc1eb518a151bfc5d4a4cd582748020198f43dc90119f25e3e736df6d83-merged.mount: Deactivated successfully.
Nov 29 06:17:11 compute-0 podman[78029]: 2025-11-29 06:17:11.875159271 +0000 UTC m=+0.205644932 container remove c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 06:17:11 compute-0 systemd[1]: libpod-221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c.scope: Deactivated successfully.
Nov 29 06:17:11 compute-0 podman[77871]: 2025-11-29 06:17:11.883858057 +0000 UTC m=+0.746354559 container died 221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c (image=quay.io/ceph/ceph:v18, name=wonderful_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 06:17:11 compute-0 systemd[1]: libpod-conmon-c55f493ec37cbe86b6bba3bbeba9b1b27c6e39904a57c472d575f7fb9cbfe270.scope: Deactivated successfully.
Nov 29 06:17:11 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3953414744' entity='client.admin' 
Nov 29 06:17:11 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:11 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7618681bf68a8efdd5790696c30fa2e80a7304f04ee403c77db50097598818cf-merged.mount: Deactivated successfully.
Nov 29 06:17:11 compute-0 podman[77871]: 2025-11-29 06:17:11.935433834 +0000 UTC m=+0.797930306 container remove 221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c (image=quay.io/ceph/ceph:v18, name=wonderful_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:17:11 compute-0 systemd[1]: libpod-conmon-221f969b23b1e00ca9353d68c7ffe28c9732d3bc6a33f6439bff9e1cbfdc079c.scope: Deactivated successfully.
Nov 29 06:17:12 compute-0 podman[78080]: 2025-11-29 06:17:12.005159274 +0000 UTC m=+0.048518342 container create 10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_satoshi, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 06:17:12 compute-0 systemd[1]: Started libpod-conmon-10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1.scope.
Nov 29 06:17:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ef56cb89087450bcf611ec02be5b120d07ea480a323cfff399ce490bfde68e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ef56cb89087450bcf611ec02be5b120d07ea480a323cfff399ce490bfde68e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1ef56cb89087450bcf611ec02be5b120d07ea480a323cfff399ce490bfde68e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:12 compute-0 podman[78080]: 2025-11-29 06:17:11.984767598 +0000 UTC m=+0.028126696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:17:12 compute-0 podman[78080]: 2025-11-29 06:17:12.081265074 +0000 UTC m=+0.124624152 container init 10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:17:12 compute-0 podman[78080]: 2025-11-29 06:17:12.086643876 +0000 UTC m=+0.130002924 container start 10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_satoshi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:17:12 compute-0 podman[78080]: 2025-11-29 06:17:12.089679952 +0000 UTC m=+0.133039000 container attach 10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_satoshi, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:17:12 compute-0 ceph-mgr[74948]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 06:17:12 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 06:17:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:12 compute-0 ceph-mgr[74948]: [cephadm INFO root] Added label _admin to host compute-0
Nov 29 06:17:12 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 29 06:17:12 compute-0 ecstatic_satoshi[78096]: Added label _admin to host compute-0
Nov 29 06:17:12 compute-0 systemd[1]: libpod-10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1.scope: Deactivated successfully.
Nov 29 06:17:12 compute-0 podman[78080]: 2025-11-29 06:17:12.662019853 +0000 UTC m=+0.705378901 container died 10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_satoshi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 06:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1ef56cb89087450bcf611ec02be5b120d07ea480a323cfff399ce490bfde68e-merged.mount: Deactivated successfully.
Nov 29 06:17:12 compute-0 podman[78080]: 2025-11-29 06:17:12.697743422 +0000 UTC m=+0.741102470 container remove 10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_satoshi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:17:12 compute-0 systemd[1]: libpod-conmon-10b590bc36eb38fa86611a0035f0855c18c515ad68dfbdbc4c7b0cf58d4c42b1.scope: Deactivated successfully.
Nov 29 06:17:12 compute-0 podman[78134]: 2025-11-29 06:17:12.755190625 +0000 UTC m=+0.038895220 container create a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359 (image=quay.io/ceph/ceph:v18, name=silly_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:17:12 compute-0 systemd[1]: Started libpod-conmon-a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359.scope.
Nov 29 06:17:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec7f341b41eb427e386022619e7a892054c1201a35460e5359cdc03f488b963/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec7f341b41eb427e386022619e7a892054c1201a35460e5359cdc03f488b963/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ec7f341b41eb427e386022619e7a892054c1201a35460e5359cdc03f488b963/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:12 compute-0 podman[78134]: 2025-11-29 06:17:12.82223729 +0000 UTC m=+0.105941965 container init a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359 (image=quay.io/ceph/ceph:v18, name=silly_bassi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 06:17:12 compute-0 podman[78134]: 2025-11-29 06:17:12.827021235 +0000 UTC m=+0.110725820 container start a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359 (image=quay.io/ceph/ceph:v18, name=silly_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:17:12 compute-0 podman[78134]: 2025-11-29 06:17:12.831257734 +0000 UTC m=+0.114962339 container attach a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359 (image=quay.io/ceph/ceph:v18, name=silly_bassi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 29 06:17:12 compute-0 podman[78134]: 2025-11-29 06:17:12.737748213 +0000 UTC m=+0.021452838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:12 compute-0 ceph-mon[74654]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 29 06:17:13 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2360873568' entity='client.admin' 
Nov 29 06:17:13 compute-0 systemd[1]: libpod-a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359.scope: Deactivated successfully.
Nov 29 06:17:13 compute-0 podman[78134]: 2025-11-29 06:17:13.380235675 +0000 UTC m=+0.663940260 container died a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359 (image=quay.io/ceph/ceph:v18, name=silly_bassi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:17:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ec7f341b41eb427e386022619e7a892054c1201a35460e5359cdc03f488b963-merged.mount: Deactivated successfully.
Nov 29 06:17:13 compute-0 podman[78134]: 2025-11-29 06:17:13.4253631 +0000 UTC m=+0.709067705 container remove a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359 (image=quay.io/ceph/ceph:v18, name=silly_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 06:17:13 compute-0 systemd[1]: libpod-conmon-a535818ae9d4b73dacba881aa1cbd58f8fffa77f8ae0148964a184519c183359.scope: Deactivated successfully.
Nov 29 06:17:13 compute-0 podman[78192]: 2025-11-29 06:17:13.481816095 +0000 UTC m=+0.036792530 container create 45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9 (image=quay.io/ceph/ceph:v18, name=thirsty_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 06:17:13 compute-0 systemd[1]: Started libpod-conmon-45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9.scope.
Nov 29 06:17:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94ab7bd71468b6d938eb28546ab86f469bbbc9ddabbfc1bc17ec0f4343ee2904/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94ab7bd71468b6d938eb28546ab86f469bbbc9ddabbfc1bc17ec0f4343ee2904/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94ab7bd71468b6d938eb28546ab86f469bbbc9ddabbfc1bc17ec0f4343ee2904/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:13 compute-0 podman[78192]: 2025-11-29 06:17:13.542555832 +0000 UTC m=+0.097532287 container init 45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9 (image=quay.io/ceph/ceph:v18, name=thirsty_williams, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:17:13 compute-0 podman[78192]: 2025-11-29 06:17:13.550367192 +0000 UTC m=+0.105343637 container start 45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9 (image=quay.io/ceph/ceph:v18, name=thirsty_williams, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:17:13 compute-0 podman[78192]: 2025-11-29 06:17:13.553672496 +0000 UTC m=+0.108648951 container attach 45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9 (image=quay.io/ceph/ceph:v18, name=thirsty_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:17:13 compute-0 podman[78192]: 2025-11-29 06:17:13.465528995 +0000 UTC m=+0.020505470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:13 compute-0 ceph-mon[74654]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:13 compute-0 ceph-mon[74654]: Added label _admin to host compute-0
Nov 29 06:17:13 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2360873568' entity='client.admin' 
Nov 29 06:17:13 compute-0 sshd-session[78179]: Invalid user autcom from 138.124.186.225 port 50160
Nov 29 06:17:14 compute-0 sshd-session[78179]: Received disconnect from 138.124.186.225 port 50160:11: Bye Bye [preauth]
Nov 29 06:17:14 compute-0 sshd-session[78179]: Disconnected from invalid user autcom 138.124.186.225 port 50160 [preauth]
Nov 29 06:17:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 29 06:17:14 compute-0 ceph-mgr[74948]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 29 06:17:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:14 compute-0 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 06:17:14 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3784537934' entity='client.admin' 
Nov 29 06:17:14 compute-0 thirsty_williams[78209]: set mgr/dashboard/cluster/status
Nov 29 06:17:14 compute-0 systemd[1]: libpod-45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9.scope: Deactivated successfully.
Nov 29 06:17:14 compute-0 podman[78192]: 2025-11-29 06:17:14.188584944 +0000 UTC m=+0.743561439 container died 45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9 (image=quay.io/ceph/ceph:v18, name=thirsty_williams, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:17:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-94ab7bd71468b6d938eb28546ab86f469bbbc9ddabbfc1bc17ec0f4343ee2904-merged.mount: Deactivated successfully.
Nov 29 06:17:14 compute-0 podman[78192]: 2025-11-29 06:17:14.236121117 +0000 UTC m=+0.791097592 container remove 45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9 (image=quay.io/ceph/ceph:v18, name=thirsty_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 06:17:14 compute-0 systemd[1]: libpod-conmon-45335a23046d9870740a62b1dd4e60e6fc0d2fa4e7aa60a384beb1098a55aeb9.scope: Deactivated successfully.
Nov 29 06:17:14 compute-0 sudo[73602]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:14 compute-0 podman[78255]: 2025-11-29 06:17:14.439917445 +0000 UTC m=+0.059716798 container create 49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:17:14 compute-0 systemd[1]: Started libpod-conmon-49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f.scope.
Nov 29 06:17:14 compute-0 podman[78255]: 2025-11-29 06:17:14.420475466 +0000 UTC m=+0.040274849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:17:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b8953503029af7fd8d9420301237f6150c6188d1969f922675499fccf504f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b8953503029af7fd8d9420301237f6150c6188d1969f922675499fccf504f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b8953503029af7fd8d9420301237f6150c6188d1969f922675499fccf504f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b8953503029af7fd8d9420301237f6150c6188d1969f922675499fccf504f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:14 compute-0 podman[78255]: 2025-11-29 06:17:14.532330006 +0000 UTC m=+0.152129349 container init 49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 06:17:14 compute-0 podman[78255]: 2025-11-29 06:17:14.539296443 +0000 UTC m=+0.159095786 container start 49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 06:17:14 compute-0 podman[78255]: 2025-11-29 06:17:14.542516694 +0000 UTC m=+0.162316037 container attach 49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 06:17:14 compute-0 sudo[78299]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqpasmrgozrfjhgyysuryqmsomvnwebp ; /usr/bin/python3'
Nov 29 06:17:14 compute-0 sudo[78299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:17:14 compute-0 python3[78301]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:17:14 compute-0 podman[78302]: 2025-11-29 06:17:14.894647063 +0000 UTC m=+0.038748926 container create be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5 (image=quay.io/ceph/ceph:v18, name=eager_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 06:17:14 compute-0 ceph-mon[74654]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 06:17:14 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3784537934' entity='client.admin' 
Nov 29 06:17:14 compute-0 systemd[1]: Started libpod-conmon-be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5.scope.
Nov 29 06:17:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4273110c6a81d4fa888ecb5fdc938f4f4d4f7d5c399d7d52b1f25071a0c00c3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4273110c6a81d4fa888ecb5fdc938f4f4d4f7d5c399d7d52b1f25071a0c00c3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:14 compute-0 podman[78302]: 2025-11-29 06:17:14.962193421 +0000 UTC m=+0.106295304 container init be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5 (image=quay.io/ceph/ceph:v18, name=eager_ramanujan, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:17:14 compute-0 podman[78302]: 2025-11-29 06:17:14.968134329 +0000 UTC m=+0.112236192 container start be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5 (image=quay.io/ceph/ceph:v18, name=eager_ramanujan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:17:14 compute-0 podman[78302]: 2025-11-29 06:17:14.877001144 +0000 UTC m=+0.021103027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:14 compute-0 podman[78302]: 2025-11-29 06:17:14.971258137 +0000 UTC m=+0.115360010 container attach be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5 (image=quay.io/ceph/ceph:v18, name=eager_ramanujan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:17:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 29 06:17:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2625088103' entity='client.admin' 
Nov 29 06:17:15 compute-0 systemd[1]: libpod-be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5.scope: Deactivated successfully.
Nov 29 06:17:15 compute-0 podman[78302]: 2025-11-29 06:17:15.578795563 +0000 UTC m=+0.722897416 container died be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5 (image=quay.io/ceph/ceph:v18, name=eager_ramanujan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 06:17:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4273110c6a81d4fa888ecb5fdc938f4f4d4f7d5c399d7d52b1f25071a0c00c3-merged.mount: Deactivated successfully.
Nov 29 06:17:15 compute-0 podman[78302]: 2025-11-29 06:17:15.626529902 +0000 UTC m=+0.770631765 container remove be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5 (image=quay.io/ceph/ceph:v18, name=eager_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:17:15 compute-0 systemd[1]: libpod-conmon-be859d020910b2595caed9387fd1384ac9d9592ad4d2e4c3d73b442d9530c1b5.scope: Deactivated successfully.
Nov 29 06:17:15 compute-0 sudo[78299]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]: [
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:     {
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:         "available": false,
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:         "ceph_device": false,
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:         "lsm_data": {},
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:         "lvs": [],
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:         "path": "/dev/sr0",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:         "rejected_reasons": [
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "Has a FileSystem",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "Insufficient space (<5GB)"
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:         ],
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:         "sys_api": {
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "actuators": null,
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "device_nodes": "sr0",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "devname": "sr0",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "human_readable_size": "482.00 KB",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "id_bus": "ata",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "model": "QEMU DVD-ROM",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "nr_requests": "2",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "parent": "/dev/sr0",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "partitions": {},
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "path": "/dev/sr0",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "removable": "1",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "rev": "2.5+",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "ro": "0",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "rotational": "1",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "sas_address": "",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "sas_device_handle": "",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "scheduler_mode": "mq-deadline",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "sectors": 0,
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "sectorsize": "2048",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "size": 493568.0,
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "support_discard": "2048",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "type": "disk",
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:             "vendor": "QEMU"
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:         }
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]:     }
Nov 29 06:17:15 compute-0 ecstatic_hodgkin[78271]: ]
Nov 29 06:17:15 compute-0 systemd[1]: libpod-49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f.scope: Deactivated successfully.
Nov 29 06:17:15 compute-0 systemd[1]: libpod-49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f.scope: Consumed 1.153s CPU time.
Nov 29 06:17:15 compute-0 conmon[78271]: conmon 49bb05fea5177c262012 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f.scope/container/memory.events
Nov 29 06:17:15 compute-0 podman[78255]: 2025-11-29 06:17:15.717671507 +0000 UTC m=+1.337470850 container died 49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 29 06:17:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-41b8953503029af7fd8d9420301237f6150c6188d1969f922675499fccf504f4-merged.mount: Deactivated successfully.
Nov 29 06:17:15 compute-0 podman[78255]: 2025-11-29 06:17:15.775980064 +0000 UTC m=+1.395779397 container remove 49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:17:15 compute-0 systemd[1]: libpod-conmon-49bb05fea5177c262012eed7abd7461739ec083067bc75c1cccf36f94651d01f.scope: Deactivated successfully.
Nov 29 06:17:15 compute-0 sudo[77945]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:17:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:17:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:17:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:17:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 06:17:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 06:17:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:17:15 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:17:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:17:15 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 29 06:17:15 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 29 06:17:15 compute-0 sudo[79414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:15 compute-0 sudo[79414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:15 compute-0 sudo[79414]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:15 compute-0 ceph-mon[74654]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:15 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2625088103' entity='client.admin' 
Nov 29 06:17:15 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:15 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:15 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:15 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:15 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 06:17:15 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:15 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:17:15 compute-0 sudo[79439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 29 06:17:15 compute-0 sudo[79439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:15 compute-0 sudo[79439]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 sudo[79464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:16 compute-0 sudo[79464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79464]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 sudo[79489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph
Nov 29 06:17:16 compute-0 sudo[79489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79489]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 sudo[79537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:16 compute-0 sudo[79537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79537]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:16 compute-0 sudo[79589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.conf.new
Nov 29 06:17:16 compute-0 sudo[79589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79589]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 sudo[79634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:16 compute-0 sudo[79634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79634]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 sudo[79664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:17:16 compute-0 sudo[79664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79664]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 sudo[79689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:16 compute-0 sudo[79689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79689]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 sudo[79714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.conf.new
Nov 29 06:17:16 compute-0 sudo[79714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79714]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 sudo[79797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:16 compute-0 sudo[79797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79797]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 sudo[79871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akzrmzijztznryuzkoqniknovcdhixoa ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764397036.092439-37281-231699099922651/async_wrapper.py j282143511716 30 /home/zuul/.ansible/tmp/ansible-tmp-1764397036.092439-37281-231699099922651/AnsiballZ_command.py _'
Nov 29 06:17:16 compute-0 sudo[79871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:17:16 compute-0 sudo[79843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.conf.new
Nov 29 06:17:16 compute-0 sudo[79843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79843]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 sudo[79887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:16 compute-0 sudo[79887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79887]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 sudo[79912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.conf.new
Nov 29 06:17:16 compute-0 sudo[79912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79912]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 ansible-async_wrapper.py[79884]: Invoked with j282143511716 30 /home/zuul/.ansible/tmp/ansible-tmp-1764397036.092439-37281-231699099922651/AnsiballZ_command.py _
Nov 29 06:17:16 compute-0 ansible-async_wrapper.py[79945]: Starting module and watcher
Nov 29 06:17:16 compute-0 ansible-async_wrapper.py[79945]: Start watching 79947 (30)
Nov 29 06:17:16 compute-0 ansible-async_wrapper.py[79947]: Start module (79947)
Nov 29 06:17:16 compute-0 ansible-async_wrapper.py[79884]: Return async_wrapper task started.
Nov 29 06:17:16 compute-0 sudo[79871]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 sudo[79937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:16 compute-0 sudo[79937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79937]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 sudo[79967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 29 06:17:16 compute-0 sudo[79967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79967]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:17:16 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:17:16 compute-0 sudo[79992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:16 compute-0 python3[79954]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:17:16 compute-0 sudo[79992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:16 compute-0 sudo[79992]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:16 compute-0 ceph-mon[74654]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 06:17:17 compute-0 sudo[80018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config
Nov 29 06:17:17 compute-0 sudo[80018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:17 compute-0 sudo[80018]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:17 compute-0 podman[80016]: 2025-11-29 06:17:17.041763018 +0000 UTC m=+0.049980993 container create 7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a (image=quay.io/ceph/ceph:v18, name=competent_elion, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:17:17 compute-0 systemd[1]: Started libpod-conmon-7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a.scope.
Nov 29 06:17:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:17:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:17 compute-0 sudo[80055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:17 compute-0 sudo[80055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0aadb614c7479e60acc0ae5cf9247596a14e29de8b0801421525fa2684b3657/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0aadb614c7479e60acc0ae5cf9247596a14e29de8b0801421525fa2684b3657/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:17 compute-0 sudo[80055]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:17 compute-0 podman[80016]: 2025-11-29 06:17:17.023549303 +0000 UTC m=+0.031767288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:17 compute-0 podman[80016]: 2025-11-29 06:17:17.127324275 +0000 UTC m=+0.135542280 container init 7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a (image=quay.io/ceph/ceph:v18, name=competent_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:17:17 compute-0 podman[80016]: 2025-11-29 06:17:17.133920142 +0000 UTC m=+0.142138137 container start 7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a (image=quay.io/ceph/ceph:v18, name=competent_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:17:17 compute-0 sudo[80085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config
Nov 29 06:17:17 compute-0 sudo[80085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:17 compute-0 sudo[80085]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:17 compute-0 podman[80016]: 2025-11-29 06:17:17.171565565 +0000 UTC m=+0.179783560 container attach 7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a (image=quay.io/ceph/ceph:v18, name=competent_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 06:17:17 compute-0 sudo[80111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:17 compute-0 sudo[80111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:17 compute-0 sudo[80111]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:17 compute-0 sudo[80136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf.new
Nov 29 06:17:17 compute-0 sudo[80136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:17 compute-0 sudo[80136]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:17 compute-0 sudo[80161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:17 compute-0 sudo[80161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:17 compute-0 sudo[80161]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:17 compute-0 sudo[80186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:17:17 compute-0 sudo[80186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:17 compute-0 sudo[80186]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:17 compute-0 sudo[80211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:17 compute-0 sudo[80211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:17 compute-0 sudo[80211]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:17 compute-0 sudo[80255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf.new
Nov 29 06:17:17 compute-0 sudo[80255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:17 compute-0 sudo[80255]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:17 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 06:17:17 compute-0 competent_elion[80080]: 
Nov 29 06:17:17 compute-0 competent_elion[80080]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 06:17:17 compute-0 sudo[80303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:17 compute-0 systemd[1]: libpod-7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a.scope: Deactivated successfully.
Nov 29 06:17:17 compute-0 podman[80016]: 2025-11-29 06:17:17.676686556 +0000 UTC m=+0.684904541 container died 7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a (image=quay.io/ceph/ceph:v18, name=competent_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 06:17:17 compute-0 sudo[80303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:17 compute-0 sudo[80303]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0aadb614c7479e60acc0ae5cf9247596a14e29de8b0801421525fa2684b3657-merged.mount: Deactivated successfully.
Nov 29 06:17:17 compute-0 podman[80016]: 2025-11-29 06:17:17.76813068 +0000 UTC m=+0.776348655 container remove 7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a (image=quay.io/ceph/ceph:v18, name=competent_elion, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:17:17 compute-0 sudo[80331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf.new
Nov 29 06:17:17 compute-0 sudo[80331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:17 compute-0 sudo[80331]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:17 compute-0 ansible-async_wrapper.py[79947]: Module complete (79947)
Nov 29 06:17:17 compute-0 systemd[1]: libpod-conmon-7fd992022eb1e2deed43264b9bd8e25273892572a88862104337830e85d1ce5a.scope: Deactivated successfully.
Nov 29 06:17:17 compute-0 sudo[80368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:17 compute-0 sudo[80368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:17 compute-0 sudo[80368]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:17 compute-0 sudo[80393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf.new
Nov 29 06:17:17 compute-0 sudo[80393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:17 compute-0 sudo[80393]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:17 compute-0 sudo[80441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:17 compute-0 sudo[80441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:18 compute-0 sudo[80441]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 ceph-mon[74654]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:18 compute-0 ceph-mon[74654]: Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:17:18 compute-0 sudo[80466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf.new /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:17:18 compute-0 sudo[80466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:18 compute-0 sudo[80466]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 06:17:18 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 06:17:18 compute-0 sudo[80491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:18 compute-0 sudo[80491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:18 compute-0 sudo[80491]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:18 compute-0 sudo[80516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 29 06:17:18 compute-0 sudo[80516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:18 compute-0 sudo[80516]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 sudo[80572]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztemfhrwlebzbamjgxcmvjatlldhputg ; /usr/bin/python3'
Nov 29 06:17:18 compute-0 sudo[80572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:17:18 compute-0 sudo[80554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:18 compute-0 sudo[80554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:18 compute-0 sudo[80554]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 sudo[80592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph
Nov 29 06:17:18 compute-0 sudo[80592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:18 compute-0 sudo[80592]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 sudo[80617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:18 compute-0 sudo[80617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:18 compute-0 sudo[80617]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 python3[80589]: ansible-ansible.legacy.async_status Invoked with jid=j282143511716.79884 mode=status _async_dir=/root/.ansible_async
Nov 29 06:17:18 compute-0 sudo[80572]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 sudo[80642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.client.admin.keyring.new
Nov 29 06:17:18 compute-0 sudo[80642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:18 compute-0 sudo[80642]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 sudo[80690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:18 compute-0 sudo[80690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:18 compute-0 sudo[80690]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 sudo[80742]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooslfknygqjprdjixkabmdxysssapjgd ; /usr/bin/python3'
Nov 29 06:17:18 compute-0 sudo[80742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:17:18 compute-0 sudo[80736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:17:18 compute-0 sudo[80736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:18 compute-0 sudo[80736]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 sudo[80766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:18 compute-0 sudo[80766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:18 compute-0 sudo[80766]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 python3[80758]: ansible-ansible.legacy.async_status Invoked with jid=j282143511716.79884 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 06:17:18 compute-0 sudo[80742]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 sudo[80791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.client.admin.keyring.new
Nov 29 06:17:18 compute-0 sudo[80791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:18 compute-0 sudo[80791]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 sudo[80839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:18 compute-0 sudo[80839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:18 compute-0 sudo[80839]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:18 compute-0 sudo[80864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.client.admin.keyring.new
Nov 29 06:17:18 compute-0 sudo[80864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:18 compute-0 sudo[80864]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:19 compute-0 sudo[80889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:19 compute-0 sudo[80935]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjnfwaovznabcbkwfbpbbzjzsfvjorcd ; /usr/bin/python3'
Nov 29 06:17:19 compute-0 sudo[80889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:19 compute-0 sudo[80935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:17:19 compute-0 ceph-mon[74654]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 06:17:19 compute-0 sudo[80889]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:19 compute-0 sudo[80942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.client.admin.keyring.new
Nov 29 06:17:19 compute-0 sudo[80942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:19 compute-0 sudo[80942]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:19 compute-0 python3[80941]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 06:17:19 compute-0 sudo[80967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:19 compute-0 sudo[80967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:19 compute-0 sudo[80967]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:19 compute-0 sudo[80935]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:19 compute-0 sudo[80993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 29 06:17:19 compute-0 sudo[80993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:19 compute-0 sudo[80993]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:19 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 06:17:19 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 06:17:19 compute-0 sudo[81019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:19 compute-0 sudo[81019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:19 compute-0 sudo[81019]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:19 compute-0 sudo[81044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config
Nov 29 06:17:19 compute-0 sudo[81044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:19 compute-0 sudo[81044]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:19 compute-0 sudo[81069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:19 compute-0 sudo[81069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:19 compute-0 sudo[81069]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:20 compute-0 sudo[81094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config
Nov 29 06:17:20 compute-0 sudo[81094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:20 compute-0 sudo[81094]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:20 compute-0 sudo[81142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grsfrjyscadtwgvotkzaijodzdgscywo ; /usr/bin/python3'
Nov 29 06:17:20 compute-0 sudo[81142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:17:20 compute-0 sudo[81143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:20 compute-0 sudo[81143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:20 compute-0 sudo[81143]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:20 compute-0 sudo[81170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring.new
Nov 29 06:17:20 compute-0 sudo[81170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:20 compute-0 sudo[81170]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:20 compute-0 python3[81151]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:17:20 compute-0 sudo[81195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:20 compute-0 sudo[81195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:20 compute-0 sudo[81195]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:20 compute-0 sudo[81229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:17:20 compute-0 sudo[81229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:20 compute-0 sudo[81229]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:20 compute-0 podman[81211]: 2025-11-29 06:17:20.358005834 +0000 UTC m=+0.103731699 container create 504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295 (image=quay.io/ceph/ceph:v18, name=peaceful_ardinghelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:17:20 compute-0 podman[81211]: 2025-11-29 06:17:20.300436257 +0000 UTC m=+0.046162122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:20 compute-0 systemd[1]: Started libpod-conmon-504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295.scope.
Nov 29 06:17:20 compute-0 sudo[81258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:20 compute-0 sudo[81258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:20 compute-0 sudo[81258]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0f036e20ab04281b1a06efe010afa5c54d17c1afbef0782828a18c36ca8d99/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0f036e20ab04281b1a06efe010afa5c54d17c1afbef0782828a18c36ca8d99/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e0f036e20ab04281b1a06efe010afa5c54d17c1afbef0782828a18c36ca8d99/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:20 compute-0 podman[81211]: 2025-11-29 06:17:20.447564466 +0000 UTC m=+0.193290421 container init 504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295 (image=quay.io/ceph/ceph:v18, name=peaceful_ardinghelli, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 06:17:20 compute-0 podman[81211]: 2025-11-29 06:17:20.457905761 +0000 UTC m=+0.203631626 container start 504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295 (image=quay.io/ceph/ceph:v18, name=peaceful_ardinghelli, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:17:20 compute-0 podman[81211]: 2025-11-29 06:17:20.461754732 +0000 UTC m=+0.207480607 container attach 504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295 (image=quay.io/ceph/ceph:v18, name=peaceful_ardinghelli, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 06:17:20 compute-0 sudo[81288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring.new
Nov 29 06:17:20 compute-0 sudo[81288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:20 compute-0 sudo[81288]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:20 compute-0 ceph-mon[74654]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 06:17:20 compute-0 ceph-mon[74654]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:20 compute-0 ceph-mon[74654]: Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 06:17:20 compute-0 ceph-mon[74654]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:20 compute-0 sudo[81337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:20 compute-0 sudo[81337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:20 compute-0 sudo[81337]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:20 compute-0 sudo[81362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring.new
Nov 29 06:17:20 compute-0 sudo[81362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:20 compute-0 sudo[81362]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:20 compute-0 sudo[81387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:20 compute-0 sudo[81387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:20 compute-0 sudo[81387]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:20 compute-0 sudo[81431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring.new
Nov 29 06:17:20 compute-0 sudo[81431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:20 compute-0 sudo[81431]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:20 compute-0 sudo[81456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:20 compute-0 sudo[81456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:20 compute-0 sudo[81456]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:21 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 06:17:21 compute-0 peaceful_ardinghelli[81283]: 
Nov 29 06:17:21 compute-0 peaceful_ardinghelli[81283]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 06:17:21 compute-0 systemd[1]: libpod-504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295.scope: Deactivated successfully.
Nov 29 06:17:21 compute-0 podman[81211]: 2025-11-29 06:17:21.028265737 +0000 UTC m=+0.773991642 container died 504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295 (image=quay.io/ceph/ceph:v18, name=peaceful_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:17:21 compute-0 sudo[81481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring.new /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 06:17:21 compute-0 sudo[81481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:21 compute-0 sudo[81481]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:17:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e0f036e20ab04281b1a06efe010afa5c54d17c1afbef0782828a18c36ca8d99-merged.mount: Deactivated successfully.
Nov 29 06:17:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:17:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:17:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:21 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev d5b7596b-4bc5-43ef-9c91-457e672e09b3 (Updating crash deployment (+1 -> 1))
Nov 29 06:17:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 06:17:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 06:17:21 compute-0 podman[81211]: 2025-11-29 06:17:21.08533578 +0000 UTC m=+0.831061685 container remove 504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295 (image=quay.io/ceph/ceph:v18, name=peaceful_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 06:17:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 06:17:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:17:21 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:21 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 29 06:17:21 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 29 06:17:21 compute-0 systemd[1]: libpod-conmon-504b3a173884cf1bf24a8325ee20161bc7a1e4646bcfe8aa2fcc8cb599bb3295.scope: Deactivated successfully.
Nov 29 06:17:21 compute-0 sudo[81142]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:21 compute-0 sudo[81518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:21 compute-0 sudo[81518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:21 compute-0 sudo[81518]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:21 compute-0 sudo[81543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:17:21 compute-0 sudo[81543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:21 compute-0 sudo[81543]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:21 compute-0 sudo[81568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:21 compute-0 sudo[81568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:21 compute-0 sudo[81568]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:21 compute-0 sudo[81593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:17:21 compute-0 sudo[81593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:21 compute-0 sudo[81641]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mioykjsadadnzzufrbcmkmfnxkdyveru ; /usr/bin/python3'
Nov 29 06:17:21 compute-0 sudo[81641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:17:21 compute-0 python3[81643]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:17:21 compute-0 podman[81663]: 2025-11-29 06:17:21.716395983 +0000 UTC m=+0.062062106 container create 24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819 (image=quay.io/ceph/ceph:v18, name=thirsty_davinci, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:17:21 compute-0 systemd[1]: Started libpod-conmon-24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819.scope.
Nov 29 06:17:21 compute-0 podman[81663]: 2025-11-29 06:17:21.692943103 +0000 UTC m=+0.038609276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108e201ac71738fa7b2a0642dcf7b514cea40f30e8242e7dc01a2c72437d7504/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108e201ac71738fa7b2a0642dcf7b514cea40f30e8242e7dc01a2c72437d7504/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/108e201ac71738fa7b2a0642dcf7b514cea40f30e8242e7dc01a2c72437d7504/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:21 compute-0 podman[81704]: 2025-11-29 06:17:21.806655056 +0000 UTC m=+0.056380754 container create 732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:17:21 compute-0 ansible-async_wrapper.py[79945]: Done in kid B.
Nov 29 06:17:21 compute-0 podman[81663]: 2025-11-29 06:17:21.828220753 +0000 UTC m=+0.173886886 container init 24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819 (image=quay.io/ceph/ceph:v18, name=thirsty_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 06:17:21 compute-0 podman[81663]: 2025-11-29 06:17:21.835278124 +0000 UTC m=+0.180944247 container start 24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819 (image=quay.io/ceph/ceph:v18, name=thirsty_davinci, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:17:21 compute-0 podman[81663]: 2025-11-29 06:17:21.838911118 +0000 UTC m=+0.184577281 container attach 24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819 (image=quay.io/ceph/ceph:v18, name=thirsty_davinci, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 06:17:21 compute-0 systemd[1]: Started libpod-conmon-732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21.scope.
Nov 29 06:17:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:21 compute-0 podman[81704]: 2025-11-29 06:17:21.779622372 +0000 UTC m=+0.029348140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:17:21 compute-0 podman[81704]: 2025-11-29 06:17:21.882110314 +0000 UTC m=+0.131836032 container init 732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 06:17:21 compute-0 podman[81704]: 2025-11-29 06:17:21.89174093 +0000 UTC m=+0.141466618 container start 732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:17:21 compute-0 zealous_shamir[81722]: 167 167
Nov 29 06:17:21 compute-0 systemd[1]: libpod-732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21.scope: Deactivated successfully.
Nov 29 06:17:21 compute-0 podman[81704]: 2025-11-29 06:17:21.89630853 +0000 UTC m=+0.146034218 container attach 732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 06:17:21 compute-0 podman[81704]: 2025-11-29 06:17:21.897128004 +0000 UTC m=+0.146853692 container died 732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:17:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b23363f840d0a3b785f08c140075b551350cfed1ab8029ac4c887457a67af4e-merged.mount: Deactivated successfully.
Nov 29 06:17:21 compute-0 podman[81704]: 2025-11-29 06:17:21.935724808 +0000 UTC m=+0.185450496 container remove 732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 06:17:21 compute-0 systemd[1]: libpod-conmon-732fa840deaa0eb2e67e851ed82af63f06ae451586814f1c26ef7ac2fb340c21.scope: Deactivated successfully.
Nov 29 06:17:21 compute-0 systemd[1]: Reloading.
Nov 29 06:17:22 compute-0 ceph-mon[74654]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 06:17:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 06:17:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 06:17:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:22 compute-0 ceph-mon[74654]: Deploying daemon crash.compute-0 on compute-0
Nov 29 06:17:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:17:22 compute-0 systemd-rc-local-generator[81769]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:17:22 compute-0 systemd-sysv-generator[81772]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:17:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:22 compute-0 systemd[1]: Reloading.
Nov 29 06:17:22 compute-0 systemd-sysv-generator[81831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:17:22 compute-0 systemd-rc-local-generator[81828]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:17:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 29 06:17:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/819257723' entity='client.admin' 
Nov 29 06:17:22 compute-0 podman[81838]: 2025-11-29 06:17:22.483135858 +0000 UTC m=+0.027357433 container died 24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819 (image=quay.io/ceph/ceph:v18, name=thirsty_davinci, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:17:22 compute-0 systemd[1]: libpod-24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819.scope: Deactivated successfully.
Nov 29 06:17:22 compute-0 sshd-session[80938]: Invalid user gitea from 115.190.37.201 port 53990
Nov 29 06:17:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-108e201ac71738fa7b2a0642dcf7b514cea40f30e8242e7dc01a2c72437d7504-merged.mount: Deactivated successfully.
Nov 29 06:17:22 compute-0 podman[81838]: 2025-11-29 06:17:22.582093179 +0000 UTC m=+0.126314754 container remove 24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819 (image=quay.io/ceph/ceph:v18, name=thirsty_davinci, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:17:22 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 06:17:22 compute-0 systemd[1]: libpod-conmon-24addaaae691ccf46c16f57c99a0468ad7ca75d1e846f7f337f549ea5211b819.scope: Deactivated successfully.
Nov 29 06:17:22 compute-0 sudo[81641]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:22 compute-0 sudo[81915]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jspseahtmvfexiimxnpmvllplszlfjzx ; /usr/bin/python3'
Nov 29 06:17:22 compute-0 sudo[81915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:17:22 compute-0 python3[81918]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:17:22 compute-0 sshd-session[80938]: Received disconnect from 115.190.37.201 port 53990:11: Bye Bye [preauth]
Nov 29 06:17:22 compute-0 sshd-session[80938]: Disconnected from invalid user gitea 115.190.37.201 port 53990 [preauth]
Nov 29 06:17:22 compute-0 podman[81929]: 2025-11-29 06:17:22.932937886 +0000 UTC m=+0.074718578 container create 47d65a8aff6f8bb06b14bb6c7e55e80de34011b4a202edc5f9e2d357b0f6e97f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 06:17:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d11ca774cc4f124d5b59323e04a96c93480c6ed05f7ffca6950cb7de29f22fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d11ca774cc4f124d5b59323e04a96c93480c6ed05f7ffca6950cb7de29f22fc/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d11ca774cc4f124d5b59323e04a96c93480c6ed05f7ffca6950cb7de29f22fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d11ca774cc4f124d5b59323e04a96c93480c6ed05f7ffca6950cb7de29f22fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:22 compute-0 podman[81929]: 2025-11-29 06:17:22.90057262 +0000 UTC m=+0.042353312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:17:23 compute-0 podman[81929]: 2025-11-29 06:17:23.009512027 +0000 UTC m=+0.151292699 container init 47d65a8aff6f8bb06b14bb6c7e55e80de34011b4a202edc5f9e2d357b0f6e97f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:17:23 compute-0 podman[81929]: 2025-11-29 06:17:23.018497694 +0000 UTC m=+0.160278346 container start 47d65a8aff6f8bb06b14bb6c7e55e80de34011b4a202edc5f9e2d357b0f6e97f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 06:17:23 compute-0 bash[81929]: 47d65a8aff6f8bb06b14bb6c7e55e80de34011b4a202edc5f9e2d357b0f6e97f
Nov 29 06:17:23 compute-0 podman[81942]: 2025-11-29 06:17:23.027771839 +0000 UTC m=+0.070523188 container create 666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5 (image=quay.io/ceph/ceph:v18, name=competent_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:17:23 compute-0 systemd[1]: Started Ceph crash.compute-0 for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 06:17:23 compute-0 sudo[81593]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:17:23 compute-0 systemd[1]: Started libpod-conmon-666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5.scope.
Nov 29 06:17:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:17:23 compute-0 podman[81942]: 2025-11-29 06:17:23.006087809 +0000 UTC m=+0.048839188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 06:17:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:23 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev d5b7596b-4bc5-43ef-9c91-457e672e09b3 (Updating crash deployment (+1 -> 1))
Nov 29 06:17:23 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event d5b7596b-4bc5-43ef-9c91-457e672e09b3 (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 29 06:17:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 06:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a369139f81ea6fbe78ae6367bffd71959dbcd1b53b42e225e33b891bf2ca560e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a369139f81ea6fbe78ae6367bffd71959dbcd1b53b42e225e33b891bf2ca560e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a369139f81ea6fbe78ae6367bffd71959dbcd1b53b42e225e33b891bf2ca560e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:23 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 33c939e7-5213-46e1-a759-288e8057c6b0 does not exist
Nov 29 06:17:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 06:17:23 compute-0 podman[81942]: 2025-11-29 06:17:23.133755301 +0000 UTC m=+0.176506680 container init 666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5 (image=quay.io/ceph/ceph:v18, name=competent_bell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:17:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:23 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev b85fcc90-c81a-44bc-a870-abe338067d16 does not exist
Nov 29 06:17:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 06:17:23 compute-0 podman[81942]: 2025-11-29 06:17:23.147408562 +0000 UTC m=+0.190159941 container start 666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5 (image=quay.io/ceph/ceph:v18, name=competent_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 29 06:17:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:23 compute-0 podman[81942]: 2025-11-29 06:17:23.151099938 +0000 UTC m=+0.193851327 container attach 666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5 (image=quay.io/ceph/ceph:v18, name=competent_bell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 06:17:23 compute-0 sudo[81968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:23 compute-0 sudo[81968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:23 compute-0 sudo[81968]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:23 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 29 06:17:23 compute-0 sudo[81993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:17:23 compute-0 sudo[81993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:23 compute-0 sudo[81993]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:23 compute-0 sudo[82020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:23 compute-0 sudo[82020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:23 compute-0 sudo[82020]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:23 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: 2025-11-29T06:17:23.427+0000 7f9014f1f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 06:17:23 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: 2025-11-29T06:17:23.427+0000 7f9014f1f640 -1 AuthRegistry(0x7f9010066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 06:17:23 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: 2025-11-29T06:17:23.429+0000 7f9014f1f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 06:17:23 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: 2025-11-29T06:17:23.429+0000 7f9014f1f640 -1 AuthRegistry(0x7f9014f1e000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 06:17:23 compute-0 ceph-mon[74654]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:23 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/819257723' entity='client.admin' 
Nov 29 06:17:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:23 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: 2025-11-29T06:17:23.430+0000 7f900e575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 06:17:23 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: 2025-11-29T06:17:23.430+0000 7f9014f1f640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 29 06:17:23 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 29 06:17:23 compute-0 sudo[82045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:17:23 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-crash-compute-0[81952]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 29 06:17:23 compute-0 sudo[82045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:23 compute-0 sudo[82045]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:23 compute-0 sudo[82081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:23 compute-0 sudo[82081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:23 compute-0 sudo[82081]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:23 compute-0 sudo[82124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 06:17:23 compute-0 sudo[82124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 29 06:17:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1374985863' entity='client.admin' 
Nov 29 06:17:23 compute-0 systemd[1]: libpod-666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5.scope: Deactivated successfully.
Nov 29 06:17:23 compute-0 podman[81942]: 2025-11-29 06:17:23.695715968 +0000 UTC m=+0.738467347 container died 666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5 (image=quay.io/ceph/ceph:v18, name=competent_bell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 06:17:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a369139f81ea6fbe78ae6367bffd71959dbcd1b53b42e225e33b891bf2ca560e-merged.mount: Deactivated successfully.
Nov 29 06:17:23 compute-0 podman[81942]: 2025-11-29 06:17:23.765214476 +0000 UTC m=+0.807965855 container remove 666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5 (image=quay.io/ceph/ceph:v18, name=competent_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:17:23 compute-0 systemd[1]: libpod-conmon-666e40e078db47c71a223dfd93ef3475ff80af3b9dba5a24eb613fbd75e6ebb5.scope: Deactivated successfully.
Nov 29 06:17:23 compute-0 sudo[81915]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:23 compute-0 sudo[82240]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hovylcgrnwbinntokmxafyiscbqpscua ; /usr/bin/python3'
Nov 29 06:17:24 compute-0 sudo[82240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:17:24 compute-0 python3[82245]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:17:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:24 compute-0 ceph-mgr[74948]: [progress INFO root] Writing back 1 completed events
Nov 29 06:17:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 06:17:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:17:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:17:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:17:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:17:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:17:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:17:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:25 compute-0 podman[82260]: 2025-11-29 06:17:25.013073082 +0000 UTC m=+0.943095888 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:17:25 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1374985863' entity='client.admin' 
Nov 29 06:17:25 compute-0 ceph-mon[74654]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:25 compute-0 podman[82291]: 2025-11-29 06:17:25.192806455 +0000 UTC m=+0.050910248 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:17:25 compute-0 podman[82260]: 2025-11-29 06:17:25.303652296 +0000 UTC m=+1.233675082 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 06:17:25 compute-0 podman[82274]: 2025-11-29 06:17:25.739627389 +0000 UTC m=+1.559071852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:25 compute-0 podman[82274]: 2025-11-29 06:17:25.949624457 +0000 UTC m=+1.769068860 container create 2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278 (image=quay.io/ceph/ceph:v18, name=vigorous_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 06:17:26 compute-0 systemd[1]: Started libpod-conmon-2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278.scope.
Nov 29 06:17:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc04187c7913311045295d4fa09d51b4417e7afc62b75932475e4fa15fb355b9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc04187c7913311045295d4fa09d51b4417e7afc62b75932475e4fa15fb355b9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc04187c7913311045295d4fa09d51b4417e7afc62b75932475e4fa15fb355b9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:26 compute-0 podman[82274]: 2025-11-29 06:17:26.0933926 +0000 UTC m=+1.912836993 container init 2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278 (image=quay.io/ceph/ceph:v18, name=vigorous_ritchie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 29 06:17:26 compute-0 podman[82274]: 2025-11-29 06:17:26.104370224 +0000 UTC m=+1.923814617 container start 2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278 (image=quay.io/ceph/ceph:v18, name=vigorous_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 06:17:26 compute-0 podman[82274]: 2025-11-29 06:17:26.109089019 +0000 UTC m=+1.928533402 container attach 2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278 (image=quay.io/ceph/ceph:v18, name=vigorous_ritchie, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:17:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:26 compute-0 sudo[82124]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:17:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:17:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:17:26 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:17:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:17:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:17:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:26 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 2a90c3e7-96d8-42a4-8b91-497db68b192f does not exist
Nov 29 06:17:26 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 7d20de59-577a-4895-871d-4672919657d5 does not exist
Nov 29 06:17:26 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev b51b2c4f-1bf1-49c1-982c-e7a6536a294c does not exist
Nov 29 06:17:26 compute-0 sudo[82348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:26 compute-0 sudo[82348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:26 compute-0 sudo[82348]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:26 compute-0 sudo[82373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:17:26 compute-0 sudo[82373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:26 compute-0 sudo[82373]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 29 06:17:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 29 06:17:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 29 06:17:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 29 06:17:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:26 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 06:17:26 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 06:17:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 06:17:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 06:17:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 06:17:26 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 06:17:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:17:26 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:26 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 06:17:26 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 06:17:26 compute-0 sudo[82398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:26 compute-0 sudo[82398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:26 compute-0 sudo[82398]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:26 compute-0 sudo[82423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:17:26 compute-0 sudo[82423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:26 compute-0 sudo[82423]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:26 compute-0 sudo[82467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:26 compute-0 sudo[82467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:26 compute-0 sudo[82467]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:26 compute-0 sudo[82492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:17:26 compute-0 sudo[82492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 29 06:17:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1722203810' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 06:17:26 compute-0 podman[82534]: 2025-11-29 06:17:26.873402824 +0000 UTC m=+0.046787259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:17:26 compute-0 podman[82534]: 2025-11-29 06:17:26.991727549 +0000 UTC m=+0.165111904 container create 0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:17:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:17:27 compute-0 systemd[1]: Started libpod-conmon-0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce.scope.
Nov 29 06:17:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:27 compute-0 podman[82534]: 2025-11-29 06:17:27.157404549 +0000 UTC m=+0.330788974 container init 0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 06:17:27 compute-0 podman[82534]: 2025-11-29 06:17:27.167803737 +0000 UTC m=+0.341188062 container start 0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:17:27 compute-0 podman[82534]: 2025-11-29 06:17:27.171288366 +0000 UTC m=+0.344672721 container attach 0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 06:17:27 compute-0 peaceful_colden[82550]: 167 167
Nov 29 06:17:27 compute-0 systemd[1]: libpod-0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce.scope: Deactivated successfully.
Nov 29 06:17:27 compute-0 conmon[82550]: conmon 0ddf0084db5c8cb47d2d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce.scope/container/memory.events
Nov 29 06:17:27 compute-0 podman[82534]: 2025-11-29 06:17:27.17667642 +0000 UTC m=+0.350060745 container died 0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 06:17:27 compute-0 ceph-mon[74654]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:27 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:27 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:27 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:27 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:17:27 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:27 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:27 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:27 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:27 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:27 compute-0 ceph-mon[74654]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 06:17:27 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 06:17:27 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 06:17:27 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:27 compute-0 ceph-mon[74654]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 06:17:27 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1722203810' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 06:17:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-64c37785d6cba884bc91263eafba1a2b1d284a11aed8fa68b9cd17bd90f32406-merged.mount: Deactivated successfully.
Nov 29 06:17:27 compute-0 podman[82534]: 2025-11-29 06:17:27.322945465 +0000 UTC m=+0.496329830 container remove 0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 06:17:27 compute-0 systemd[1]: libpod-conmon-0ddf0084db5c8cb47d2db0e7d7cd21884384bb3306f4336251c534d0f9f0a2ce.scope: Deactivated successfully.
Nov 29 06:17:27 compute-0 sudo[82492]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 29 06:17:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 06:17:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:17:27 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1722203810' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 06:17:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 29 06:17:27 compute-0 vigorous_ritchie[82328]: set require_min_compat_client to mimic
Nov 29 06:17:27 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 29 06:17:27 compute-0 systemd[1]: libpod-2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278.scope: Deactivated successfully.
Nov 29 06:17:27 compute-0 podman[82274]: 2025-11-29 06:17:27.49825115 +0000 UTC m=+3.317695553 container died 2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278 (image=quay.io/ceph/ceph:v18, name=vigorous_ritchie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:17:27 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:17:27 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:27 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.vxabpq (unknown last config time)...
Nov 29 06:17:27 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.vxabpq (unknown last config time)...
Nov 29 06:17:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.vxabpq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 06:17:27 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vxabpq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 06:17:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 06:17:27 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:17:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:17:27 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:27 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.vxabpq on compute-0
Nov 29 06:17:27 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.vxabpq on compute-0
Nov 29 06:17:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc04187c7913311045295d4fa09d51b4417e7afc62b75932475e4fa15fb355b9-merged.mount: Deactivated successfully.
Nov 29 06:17:27 compute-0 sudo[82584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:27 compute-0 sudo[82584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:27 compute-0 sudo[82584]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:27 compute-0 podman[82274]: 2025-11-29 06:17:27.676947262 +0000 UTC m=+3.496391665 container remove 2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278 (image=quay.io/ceph/ceph:v18, name=vigorous_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 06:17:27 compute-0 sudo[82609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:17:27 compute-0 sudo[82609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:27 compute-0 systemd[1]: libpod-conmon-2b211d6a70b5cead536e9ec9cf62aa134bc225d74ec2e640376a0f844b76d278.scope: Deactivated successfully.
Nov 29 06:17:27 compute-0 sudo[82609]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:27 compute-0 sudo[82240]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:27 compute-0 sudo[82634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:27 compute-0 sudo[82634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:27 compute-0 sudo[82634]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:27 compute-0 sudo[82659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:17:27 compute-0 sudo[82659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:28 compute-0 podman[82700]: 2025-11-29 06:17:28.070223103 +0000 UTC m=+0.028278640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:17:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:28 compute-0 sudo[82737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzwyodmzsrvpvmnvlsihknfhsmxilwla ; /usr/bin/python3'
Nov 29 06:17:28 compute-0 sudo[82737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:17:28 compute-0 podman[82700]: 2025-11-29 06:17:28.283080162 +0000 UTC m=+0.241135689 container create 0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:17:28 compute-0 python3[82739]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:17:28 compute-0 podman[82740]: 2025-11-29 06:17:28.478197923 +0000 UTC m=+0.071838346 container create 843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0 (image=quay.io/ceph/ceph:v18, name=brave_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:17:28 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1722203810' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 06:17:28 compute-0 ceph-mon[74654]: osdmap e3: 0 total, 0 up, 0 in
Nov 29 06:17:28 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:28 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:28 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vxabpq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 06:17:28 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:17:28 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:28 compute-0 systemd[1]: Started libpod-conmon-0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351.scope.
Nov 29 06:17:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:28 compute-0 systemd[1]: Started libpod-conmon-843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0.scope.
Nov 29 06:17:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:28 compute-0 podman[82700]: 2025-11-29 06:17:28.516817148 +0000 UTC m=+0.474872645 container init 0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_boyd, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 06:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afff819ccf962ab91c47989d0b25c6726c02b43b0cfa1950c3d4822a2ed720b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afff819ccf962ab91c47989d0b25c6726c02b43b0cfa1950c3d4822a2ed720b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afff819ccf962ab91c47989d0b25c6726c02b43b0cfa1950c3d4822a2ed720b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:28 compute-0 podman[82700]: 2025-11-29 06:17:28.527383791 +0000 UTC m=+0.485439278 container start 0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_boyd, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 06:17:28 compute-0 podman[82700]: 2025-11-29 06:17:28.531165759 +0000 UTC m=+0.489221246 container attach 0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_boyd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:17:28 compute-0 gifted_boyd[82755]: 167 167
Nov 29 06:17:28 compute-0 systemd[1]: libpod-0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351.scope: Deactivated successfully.
Nov 29 06:17:28 compute-0 podman[82740]: 2025-11-29 06:17:28.537595463 +0000 UTC m=+0.131235876 container init 843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0 (image=quay.io/ceph/ceph:v18, name=brave_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:17:28 compute-0 podman[82700]: 2025-11-29 06:17:28.538239341 +0000 UTC m=+0.496294838 container died 0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_boyd, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:17:28 compute-0 podman[82740]: 2025-11-29 06:17:28.544553582 +0000 UTC m=+0.138193975 container start 843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0 (image=quay.io/ceph/ceph:v18, name=brave_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:17:28 compute-0 podman[82740]: 2025-11-29 06:17:28.548536136 +0000 UTC m=+0.142176539 container attach 843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0 (image=quay.io/ceph/ceph:v18, name=brave_johnson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:17:28 compute-0 podman[82740]: 2025-11-29 06:17:28.459527699 +0000 UTC m=+0.053168132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-726a5ca468b5929d0b9b3132c401666fd51895821a89d3b1046dfb810e9cb90b-merged.mount: Deactivated successfully.
Nov 29 06:17:28 compute-0 podman[82700]: 2025-11-29 06:17:28.585263326 +0000 UTC m=+0.543318823 container remove 0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:17:28 compute-0 systemd[1]: libpod-conmon-0f8ad11c85257f7f49a56bf7aa5375307fb590632731ff0d5fb253eccbab8351.scope: Deactivated successfully.
Nov 29 06:17:28 compute-0 sudo[82659]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:17:28 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:17:28 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:17:28 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:17:28 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:17:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:17:28 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:28 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev de7708f6-109f-4bda-9c2f-6a3e09336563 does not exist
Nov 29 06:17:28 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 702a2450-fbe1-48bb-9dff-cf9969313aac does not exist
Nov 29 06:17:28 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 3e257dda-5644-4265-b1e3-a3f205625295 does not exist
Nov 29 06:17:28 compute-0 sudo[82779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:28 compute-0 sudo[82779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:28 compute-0 sudo[82779]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:28 compute-0 sudo[82804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:17:28 compute-0 sudo[82804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:28 compute-0 sudo[82804]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:29 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:29 compute-0 sudo[82849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:29 compute-0 sudo[82849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:29 compute-0 sudo[82849]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:29 compute-0 sudo[82874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:17:29 compute-0 sudo[82874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:29 compute-0 sudo[82874]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:29 compute-0 sudo[82899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:29 compute-0 sudo[82899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:29 compute-0 sudo[82899]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:29 compute-0 sudo[82924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 29 06:17:29 compute-0 sudo[82924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:29 compute-0 ceph-mon[74654]: Reconfiguring mgr.compute-0.vxabpq (unknown last config time)...
Nov 29 06:17:29 compute-0 ceph-mon[74654]: Reconfiguring daemon mgr.compute-0.vxabpq on compute-0
Nov 29 06:17:29 compute-0 ceph-mon[74654]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:17:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:29 compute-0 sudo[82924]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 06:17:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 06:17:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 06:17:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 06:17:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:29 compute-0 ceph-mgr[74948]: [cephadm INFO root] Added host compute-0
Nov 29 06:17:29 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 06:17:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:17:29 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:17:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:17:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:17:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:29 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 5e86a151-7046-48f1-af39-40904819a436 does not exist
Nov 29 06:17:29 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev d4f47224-617f-4fcc-b3e6-e8c67fd29605 does not exist
Nov 29 06:17:29 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 48f46211-437f-4c2a-a39a-e7a00041c86b does not exist
Nov 29 06:17:29 compute-0 sudo[82969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:17:29 compute-0 sudo[82969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:29 compute-0 sudo[82969]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:29 compute-0 sudo[82994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:17:29 compute-0 sudo[82994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:17:29 compute-0 sudo[82994]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:30 compute-0 ceph-mon[74654]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:17:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:17:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:31 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Nov 29 06:17:31 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Nov 29 06:17:31 compute-0 ceph-mon[74654]: Added host compute-0
Nov 29 06:17:31 compute-0 ceph-mon[74654]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:17:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:32 compute-0 ceph-mon[74654]: Deploying cephadm binary to compute-1
Nov 29 06:17:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:34 compute-0 ceph-mon[74654]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 06:17:35 compute-0 ceph-mon[74654]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:35 compute-0 ceph-mgr[74948]: [cephadm INFO root] Added host compute-1
Nov 29 06:17:35 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Added host compute-1
Nov 29 06:17:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:17:36 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:17:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:36 compute-0 ceph-mon[74654]: Added host compute-1
Nov 29 06:17:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:36 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:36 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Nov 29 06:17:36 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Nov 29 06:17:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:17:37 compute-0 ceph-mon[74654]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:37 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:37 compute-0 ceph-mon[74654]: Deploying cephadm binary to compute-2
Nov 29 06:17:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:17:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:39 compute-0 ceph-mon[74654]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 06:17:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:40 compute-0 ceph-mgr[74948]: [cephadm INFO root] Added host compute-2
Nov 29 06:17:40 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Added host compute-2
Nov 29 06:17:40 compute-0 ceph-mgr[74948]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 29 06:17:40 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 29 06:17:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 06:17:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:40 compute-0 ceph-mgr[74948]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 29 06:17:40 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 29 06:17:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 06:17:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:40 compute-0 ceph-mgr[74948]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 06:17:40 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 06:17:40 compute-0 ceph-mgr[74948]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Nov 29 06:17:40 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Nov 29 06:17:40 compute-0 ceph-mgr[74948]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 29 06:17:40 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 29 06:17:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 29 06:17:41 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:41 compute-0 brave_johnson[82760]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 06:17:41 compute-0 brave_johnson[82760]: Added host 'compute-1' with addr '192.168.122.101'
Nov 29 06:17:41 compute-0 brave_johnson[82760]: Added host 'compute-2' with addr '192.168.122.102'
Nov 29 06:17:41 compute-0 brave_johnson[82760]: Scheduled mon update...
Nov 29 06:17:41 compute-0 brave_johnson[82760]: Scheduled mgr update...
Nov 29 06:17:41 compute-0 brave_johnson[82760]: Scheduled osd.default_drive_group update...
Nov 29 06:17:41 compute-0 systemd[1]: libpod-843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0.scope: Deactivated successfully.
Nov 29 06:17:41 compute-0 podman[82740]: 2025-11-29 06:17:41.060564298 +0000 UTC m=+12.654204711 container died 843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0 (image=quay.io/ceph/ceph:v18, name=brave_johnson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 29 06:17:41 compute-0 ceph-mon[74654]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8afff819ccf962ab91c47989d0b25c6726c02b43b0cfa1950c3d4822a2ed720b-merged.mount: Deactivated successfully.
Nov 29 06:17:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:17:42 compute-0 podman[82740]: 2025-11-29 06:17:42.088201766 +0000 UTC m=+13.681842209 container remove 843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0 (image=quay.io/ceph/ceph:v18, name=brave_johnson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:17:42 compute-0 sudo[82737]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:42 compute-0 systemd[1]: libpod-conmon-843ce1b7fa03d5a5531d19fd0974b6b93e4eaf366c0d477d18c0e8566b0749a0.scope: Deactivated successfully.
Nov 29 06:17:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:42 compute-0 sudo[83055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwnzcmivytsizusdnpqsxjfhnykpprbt ; /usr/bin/python3'
Nov 29 06:17:42 compute-0 sudo[83055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:17:42 compute-0 ceph-mon[74654]: Added host compute-2
Nov 29 06:17:42 compute-0 ceph-mon[74654]: Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 29 06:17:42 compute-0 ceph-mon[74654]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 29 06:17:42 compute-0 ceph-mon[74654]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 06:17:42 compute-0 ceph-mon[74654]: Marking host: compute-1 for OSDSpec preview refresh.
Nov 29 06:17:42 compute-0 ceph-mon[74654]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 29 06:17:42 compute-0 python3[83057]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:17:42 compute-0 podman[83059]: 2025-11-29 06:17:42.635164802 +0000 UTC m=+0.043825995 container create 112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6 (image=quay.io/ceph/ceph:v18, name=vigilant_mendeleev, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 06:17:42 compute-0 systemd[1]: Started libpod-conmon-112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6.scope.
Nov 29 06:17:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93dc71a3d60360a6d61cb5ab2d2efbab6b6e11340cf3fd827d8fec18ff7483f3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93dc71a3d60360a6d61cb5ab2d2efbab6b6e11340cf3fd827d8fec18ff7483f3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93dc71a3d60360a6d61cb5ab2d2efbab6b6e11340cf3fd827d8fec18ff7483f3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:17:42 compute-0 podman[83059]: 2025-11-29 06:17:42.616844078 +0000 UTC m=+0.025505361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:17:42 compute-0 podman[83059]: 2025-11-29 06:17:42.721729208 +0000 UTC m=+0.130390431 container init 112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6 (image=quay.io/ceph/ceph:v18, name=vigilant_mendeleev, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 06:17:42 compute-0 podman[83059]: 2025-11-29 06:17:42.73228119 +0000 UTC m=+0.140942383 container start 112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6 (image=quay.io/ceph/ceph:v18, name=vigilant_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:17:42 compute-0 podman[83059]: 2025-11-29 06:17:42.735790831 +0000 UTC m=+0.144452024 container attach 112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6 (image=quay.io/ceph/ceph:v18, name=vigilant_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 06:17:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 06:17:43 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/178888563' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 06:17:43 compute-0 vigilant_mendeleev[83075]: 
Nov 29 06:17:43 compute-0 vigilant_mendeleev[83075]: {"fsid":"336ec58c-893b-528f-a0c1-6ed1196bc047","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":96,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-29T06:16:03.952029+0000","services":{}},"progress_events":{}}
Nov 29 06:17:43 compute-0 systemd[1]: libpod-112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6.scope: Deactivated successfully.
Nov 29 06:17:43 compute-0 podman[83059]: 2025-11-29 06:17:43.363921921 +0000 UTC m=+0.772583124 container died 112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6 (image=quay.io/ceph/ceph:v18, name=vigilant_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 06:17:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-93dc71a3d60360a6d61cb5ab2d2efbab6b6e11340cf3fd827d8fec18ff7483f3-merged.mount: Deactivated successfully.
Nov 29 06:17:43 compute-0 podman[83059]: 2025-11-29 06:17:43.423435174 +0000 UTC m=+0.832096367 container remove 112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6 (image=quay.io/ceph/ceph:v18, name=vigilant_mendeleev, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 06:17:43 compute-0 systemd[1]: libpod-conmon-112c5cb33929500c687ed117716500100a91458f4acd34851bc53aaed076c5a6.scope: Deactivated successfully.
Nov 29 06:17:43 compute-0 sudo[83055]: pam_unix(sudo:session): session closed for user root
Nov 29 06:17:43 compute-0 ceph-mon[74654]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:43 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/178888563' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 06:17:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:45 compute-0 ceph-mon[74654]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:46 compute-0 ceph-mon[74654]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:46 compute-0 sshd-session[83112]: Invalid user tempuser from 31.6.212.12 port 52602
Nov 29 06:17:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:17:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 06:17:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 06:17:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:17:47 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:17:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:17:47 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 29 06:17:47 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 29 06:17:47 compute-0 sshd-session[83112]: Received disconnect from 31.6.212.12 port 52602:11: Bye Bye [preauth]
Nov 29 06:17:47 compute-0 sshd-session[83112]: Disconnected from invalid user tempuser 31.6.212.12 port 52602 [preauth]
Nov 29 06:17:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:17:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 06:17:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:17:48 compute-0 ceph-mon[74654]: Updating compute-1:/etc/ceph/ceph.conf
Nov 29 06:17:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:48 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:17:48 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:17:49 compute-0 ceph-mon[74654]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:49 compute-0 ceph-mon[74654]: Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:17:49 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 29 06:17:49 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 29 06:17:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:50 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 06:17:50 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 06:17:51 compute-0 ceph-mon[74654]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 29 06:17:51 compute-0 ceph-mon[74654]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:17:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:51 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:17:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:51 compute-0 ceph-mgr[74948]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 06:17:51 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 06:17:51 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:51 compute-0 ceph-mgr[74948]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 06:17:51 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 06:17:51 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:17:51.919+0000 7f90e34d8640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Nov 29 06:17:51 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev f16fc35c-a5e4-431b-90d1-3bb309788cfc (Updating crash deployment (+1 -> 2))
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: service_name: mon
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: placement:
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:   hosts:
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:   - compute-0
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:   - compute-1
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:   - compute-2
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:17:51.920+0000 7f90e34d8640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: service_name: mgr
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: placement:
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:   hosts:
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:   - compute-0
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:   - compute-1
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]:   - compute-2
Nov 29 06:17:51 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 06:17:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 06:17:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 06:17:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 06:17:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:17:51 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:51 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Nov 29 06:17:51 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Nov 29 06:17:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:17:52 compute-0 ceph-mon[74654]: Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 06:17:52 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:52 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:17:52 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 06:17:52 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 06:17:52 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:17:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 29 06:17:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)
Nov 29 06:17:53 compute-0 ceph-mon[74654]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:53 compute-0 ceph-mon[74654]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 06:17:53 compute-0 ceph-mon[74654]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:53 compute-0 ceph-mon[74654]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 06:17:53 compute-0 ceph-mon[74654]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:53 compute-0 ceph-mon[74654]: Deploying daemon crash.compute-1 on compute-1
Nov 29 06:17:53 compute-0 ceph-mon[74654]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 29 06:17:53 compute-0 ceph-mon[74654]: Health check failed: failed to probe daemons or devices (CEPHADM_REFRESH_FAILED)
Nov 29 06:17:53 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:17:54
Nov 29 06:17:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:17:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:17:54 compute-0 ceph-mgr[74948]: [balancer INFO root] No pools available
Nov 29 06:17:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:17:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:17:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:17:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:17:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:17:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:17:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:17:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:17:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:17:54 compute-0 ceph-mon[74654]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:55 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:56 compute-0 ceph-mon[74654]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:17:57 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:59 compute-0 ceph-mon[74654]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:17:59 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:00 compute-0 sshd-session[83114]: Received disconnect from 104.208.108.166 port 5672:11: Bye Bye [preauth]
Nov 29 06:18:00 compute-0 sshd-session[83114]: Disconnected from authenticating user root 104.208.108.166 port 5672 [preauth]
Nov 29 06:18:01 compute-0 ceph-mon[74654]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:01 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:18:03 compute-0 ceph-mon[74654]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:03 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:05 compute-0 ceph-mon[74654]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:05 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:18:07 compute-0 ceph-mon[74654]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:07 compute-0 sshd-session[83116]: Invalid user bodega from 103.147.159.91 port 52346
Nov 29 06:18:07 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:08 compute-0 sshd-session[83118]: Invalid user smart from 79.116.35.29 port 51668
Nov 29 06:18:08 compute-0 sshd-session[83116]: Received disconnect from 103.147.159.91 port 52346:11: Bye Bye [preauth]
Nov 29 06:18:08 compute-0 sshd-session[83116]: Disconnected from invalid user bodega 103.147.159.91 port 52346 [preauth]
Nov 29 06:18:08 compute-0 sshd-session[83118]: Received disconnect from 79.116.35.29 port 51668:11: Bye Bye [preauth]
Nov 29 06:18:08 compute-0 sshd-session[83118]: Disconnected from invalid user smart 79.116.35.29 port 51668 [preauth]
Nov 29 06:18:09 compute-0 ceph-mon[74654]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:09 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:11 compute-0 ceph-mon[74654]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:11 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:18:13 compute-0 sudo[83143]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnmfwurnzwwcpfebixbekxdgzpyfxgyx ; /usr/bin/python3'
Nov 29 06:18:13 compute-0 sudo[83143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:13 compute-0 python3[83145]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:18:13 compute-0 ceph-mon[74654]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:13 compute-0 podman[83147]: 2025-11-29 06:18:13.807445403 +0000 UTC m=+0.030228245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:18:13 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:13 compute-0 podman[83147]: 2025-11-29 06:18:13.945802381 +0000 UTC m=+0.168585233 container create 0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920 (image=quay.io/ceph/ceph:v18, name=brave_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:18:14 compute-0 systemd[1]: Started libpod-conmon-0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920.scope.
Nov 29 06:18:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c067d85274d6afe5e5c58727161c96113ab0c4c627ebc38d2b99589bb51e1d6c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c067d85274d6afe5e5c58727161c96113ab0c4c627ebc38d2b99589bb51e1d6c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c067d85274d6afe5e5c58727161c96113ab0c4c627ebc38d2b99589bb51e1d6c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:14 compute-0 podman[83147]: 2025-11-29 06:18:14.130507985 +0000 UTC m=+0.353290867 container init 0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920 (image=quay.io/ceph/ceph:v18, name=brave_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:18:14 compute-0 podman[83147]: 2025-11-29 06:18:14.141986994 +0000 UTC m=+0.364769846 container start 0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920 (image=quay.io/ceph/ceph:v18, name=brave_franklin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:18:14 compute-0 podman[83147]: 2025-11-29 06:18:14.186774265 +0000 UTC m=+0.409557167 container attach 0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920 (image=quay.io/ceph/ceph:v18, name=brave_franklin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 06:18:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 06:18:14 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2430457078' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 06:18:14 compute-0 brave_franklin[83163]: 
Nov 29 06:18:14 compute-0 brave_franklin[83163]: {"fsid":"336ec58c-893b-528f-a0c1-6ed1196bc047","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false},"CEPHADM_REFRESH_FAILED":{"severity":"HEALTH_WARN","summary":{"message":"failed to probe daemons or devices","count":1},"muted":false},"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":127,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T06:17:55.922038+0000","services":{}},"progress_events":{"f16fc35c-a5e4-431b-90d1-3bb309788cfc":{"message":"Updating crash deployment (+1 -> 2) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 29 06:18:14 compute-0 systemd[1]: libpod-0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920.scope: Deactivated successfully.
Nov 29 06:18:14 compute-0 podman[83147]: 2025-11-29 06:18:14.772950293 +0000 UTC m=+0.995733155 container died 0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920 (image=quay.io/ceph/ceph:v18, name=brave_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 06:18:15 compute-0 ceph-mon[74654]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:15 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2430457078' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 06:18:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c067d85274d6afe5e5c58727161c96113ab0c4c627ebc38d2b99589bb51e1d6c-merged.mount: Deactivated successfully.
Nov 29 06:18:15 compute-0 podman[83147]: 2025-11-29 06:18:15.131753628 +0000 UTC m=+1.354536440 container remove 0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920 (image=quay.io/ceph/ceph:v18, name=brave_franklin, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:18:15 compute-0 systemd[1]: libpod-conmon-0817939557ffdcf6ceb36b59eeda86567114c485f3a0318a389fed57f0deb920.scope: Deactivated successfully.
Nov 29 06:18:15 compute-0 sudo[83143]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:15 compute-0 sshd-session[83201]: Received disconnect from 138.124.186.225 port 54652:11: Bye Bye [preauth]
Nov 29 06:18:15 compute-0 sshd-session[83201]: Disconnected from authenticating user root 138.124.186.225 port 54652 [preauth]
Nov 29 06:18:15 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:17 compute-0 ceph-mon[74654]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:18:17 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:18:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 06:18:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:18 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev f16fc35c-a5e4-431b-90d1-3bb309788cfc (Updating crash deployment (+1 -> 2))
Nov 29 06:18:18 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event f16fc35c-a5e4-431b-90d1-3bb309788cfc (Updating crash deployment (+1 -> 2)) in 27 seconds
Nov 29 06:18:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 06:18:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:18:18 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:18:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:18:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:18:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:18:18 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:18:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:18:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:18:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:18:18 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:18:18 compute-0 sudo[83203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:18 compute-0 sudo[83203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:18 compute-0 sudo[83203]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:18 compute-0 sudo[83228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:18:18 compute-0 sudo[83228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:18 compute-0 sudo[83228]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:18 compute-0 sudo[83253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:18 compute-0 sudo[83253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:18 compute-0 sudo[83253]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:18 compute-0 sudo[83278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:18:18 compute-0 sudo[83278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:19 compute-0 podman[83345]: 2025-11-29 06:18:19.225661456 +0000 UTC m=+0.046521742 container create 1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 06:18:19 compute-0 systemd[1]: Started libpod-conmon-1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a.scope.
Nov 29 06:18:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:19 compute-0 podman[83345]: 2025-11-29 06:18:19.302337149 +0000 UTC m=+0.123197485 container init 1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:18:19 compute-0 podman[83345]: 2025-11-29 06:18:19.207614499 +0000 UTC m=+0.028474795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:18:19 compute-0 podman[83345]: 2025-11-29 06:18:19.308405513 +0000 UTC m=+0.129265789 container start 1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:18:19 compute-0 nostalgic_franklin[83361]: 167 167
Nov 29 06:18:19 compute-0 podman[83345]: 2025-11-29 06:18:19.313749886 +0000 UTC m=+0.134610192 container attach 1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 06:18:19 compute-0 systemd[1]: libpod-1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a.scope: Deactivated successfully.
Nov 29 06:18:19 compute-0 conmon[83361]: conmon 1a984c0485914b571ea9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a.scope/container/memory.events
Nov 29 06:18:19 compute-0 podman[83345]: 2025-11-29 06:18:19.315822465 +0000 UTC m=+0.136682771 container died 1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 06:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-258edc8726749ce9b7d7f47169eceb443e9c39dfdeaf3300311f8b586fd373a8-merged.mount: Deactivated successfully.
Nov 29 06:18:19 compute-0 podman[83345]: 2025-11-29 06:18:19.363232001 +0000 UTC m=+0.184092278 container remove 1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:18:19 compute-0 systemd[1]: libpod-conmon-1a984c0485914b571ea9ac20c7f56b84c07c1b4eabd1c0d103d47e2c65f8c07a.scope: Deactivated successfully.
Nov 29 06:18:19 compute-0 ceph-mon[74654]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:18:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:18:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:18:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:18:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:18:19 compute-0 podman[83384]: 2025-11-29 06:18:19.576837272 +0000 UTC m=+0.058216026 container create e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:18:19 compute-0 ceph-mgr[74948]: [progress INFO root] Writing back 2 completed events
Nov 29 06:18:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 06:18:19 compute-0 systemd[1]: Started libpod-conmon-e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e.scope.
Nov 29 06:18:19 compute-0 podman[83384]: 2025-11-29 06:18:19.554277277 +0000 UTC m=+0.035656021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:18:19 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c24a33548acac4d6657140a88e91a0edea3a133219e0cb170bd19604ea3b72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c24a33548acac4d6657140a88e91a0edea3a133219e0cb170bd19604ea3b72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c24a33548acac4d6657140a88e91a0edea3a133219e0cb170bd19604ea3b72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c24a33548acac4d6657140a88e91a0edea3a133219e0cb170bd19604ea3b72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c24a33548acac4d6657140a88e91a0edea3a133219e0cb170bd19604ea3b72/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:19 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:19 compute-0 podman[83384]: 2025-11-29 06:18:19.941028171 +0000 UTC m=+0.422406935 container init e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 06:18:19 compute-0 podman[83384]: 2025-11-29 06:18:19.958206583 +0000 UTC m=+0.439585297 container start e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 06:18:19 compute-0 podman[83384]: 2025-11-29 06:18:19.96195738 +0000 UTC m=+0.443336154 container attach e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 06:18:20 compute-0 bold_agnesi[83401]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:18:20 compute-0 bold_agnesi[83401]: --> relative data size: 1.0
Nov 29 06:18:20 compute-0 bold_agnesi[83401]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 06:18:20 compute-0 bold_agnesi[83401]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 91f280f1-e534-4adc-bf70-98711580c2dd
Nov 29 06:18:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f793b967-de22-4105-bb0d-c91464bf150f"} v 0) v1
Nov 29 06:18:20 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/321313974' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f793b967-de22-4105-bb0d-c91464bf150f"}]: dispatch
Nov 29 06:18:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 29 06:18:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 06:18:20 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/321313974' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f793b967-de22-4105-bb0d-c91464bf150f"}]': finished
Nov 29 06:18:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 29 06:18:20 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 29 06:18:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 06:18:20 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:20 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 06:18:20 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:20 compute-0 ceph-mon[74654]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:20 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/321313974' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f793b967-de22-4105-bb0d-c91464bf150f"}]: dispatch
Nov 29 06:18:20 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/321313974' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f793b967-de22-4105-bb0d-c91464bf150f"}]': finished
Nov 29 06:18:20 compute-0 ceph-mon[74654]: osdmap e4: 1 total, 0 up, 1 in
Nov 29 06:18:20 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "91f280f1-e534-4adc-bf70-98711580c2dd"} v 0) v1
Nov 29 06:18:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3026959268' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "91f280f1-e534-4adc-bf70-98711580c2dd"}]: dispatch
Nov 29 06:18:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 29 06:18:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 06:18:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3026959268' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "91f280f1-e534-4adc-bf70-98711580c2dd"}]': finished
Nov 29 06:18:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 29 06:18:21 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 29 06:18:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 06:18:21 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:21 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:21 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 06:18:21 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:21 compute-0 bold_agnesi[83401]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 06:18:21 compute-0 lvm[83448]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 06:18:21 compute-0 lvm[83448]: VG ceph_vg0 finished
Nov 29 06:18:21 compute-0 bold_agnesi[83401]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 29 06:18:21 compute-0 bold_agnesi[83401]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 29 06:18:21 compute-0 bold_agnesi[83401]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 06:18:21 compute-0 bold_agnesi[83401]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 06:18:21 compute-0 bold_agnesi[83401]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 29 06:18:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 06:18:21 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4241004139' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 06:18:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 06:18:21 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4020978526' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 06:18:21 compute-0 bold_agnesi[83401]:  stderr: got monmap epoch 1
Nov 29 06:18:21 compute-0 bold_agnesi[83401]: --> Creating keyring file for osd.1
Nov 29 06:18:21 compute-0 bold_agnesi[83401]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 29 06:18:21 compute-0 bold_agnesi[83401]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 29 06:18:21 compute-0 bold_agnesi[83401]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 91f280f1-e534-4adc-bf70-98711580c2dd --setuser ceph --setgroup ceph
Nov 29 06:18:21 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:21 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3026959268' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "91f280f1-e534-4adc-bf70-98711580c2dd"}]: dispatch
Nov 29 06:18:21 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3026959268' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "91f280f1-e534-4adc-bf70-98711580c2dd"}]': finished
Nov 29 06:18:21 compute-0 ceph-mon[74654]: osdmap e5: 2 total, 0 up, 2 in
Nov 29 06:18:21 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:21 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:21 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/4241004139' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 06:18:21 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/4020978526' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 06:18:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:18:22 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 06:18:22 compute-0 ceph-mon[74654]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:22 compute-0 ceph-mon[74654]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 06:18:23 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:18:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:18:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:18:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:18:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:18:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:18:24 compute-0 bold_agnesi[83401]:  stderr: 2025-11-29T06:18:21.872+0000 7f19b5968740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 06:18:24 compute-0 bold_agnesi[83401]:  stderr: 2025-11-29T06:18:21.872+0000 7f19b5968740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 06:18:24 compute-0 bold_agnesi[83401]:  stderr: 2025-11-29T06:18:21.872+0000 7f19b5968740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 06:18:24 compute-0 bold_agnesi[83401]:  stderr: 2025-11-29T06:18:21.872+0000 7f19b5968740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 29 06:18:24 compute-0 bold_agnesi[83401]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 29 06:18:24 compute-0 bold_agnesi[83401]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 06:18:24 compute-0 bold_agnesi[83401]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 29 06:18:24 compute-0 bold_agnesi[83401]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 06:18:24 compute-0 bold_agnesi[83401]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 29 06:18:24 compute-0 bold_agnesi[83401]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 06:18:24 compute-0 bold_agnesi[83401]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 06:18:24 compute-0 bold_agnesi[83401]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 29 06:18:24 compute-0 bold_agnesi[83401]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 29 06:18:24 compute-0 systemd[1]: libpod-e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e.scope: Deactivated successfully.
Nov 29 06:18:24 compute-0 podman[83384]: 2025-11-29 06:18:24.509352573 +0000 UTC m=+4.990731297 container died e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:18:24 compute-0 systemd[1]: libpod-e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e.scope: Consumed 2.643s CPU time.
Nov 29 06:18:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-83c24a33548acac4d6657140a88e91a0edea3a133219e0cb170bd19604ea3b72-merged.mount: Deactivated successfully.
Nov 29 06:18:24 compute-0 podman[83384]: 2025-11-29 06:18:24.571755558 +0000 UTC m=+5.053134282 container remove e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:18:24 compute-0 systemd[1]: libpod-conmon-e9fbb3ae787c537f03ff324a7045322127b7e0f1019400a3b6a5f20adfbe357e.scope: Deactivated successfully.
Nov 29 06:18:24 compute-0 sudo[83278]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:24 compute-0 sudo[84370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:24 compute-0 sudo[84370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:24 compute-0 sudo[84370]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:24 compute-0 sudo[84396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:18:24 compute-0 sudo[84396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:24 compute-0 sudo[84396]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:24 compute-0 sudo[84421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:24 compute-0 sudo[84421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:24 compute-0 sudo[84421]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:24 compute-0 sudo[84446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:18:24 compute-0 sudo[84446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:24 compute-0 ceph-mon[74654]: pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:25 compute-0 podman[84510]: 2025-11-29 06:18:25.308804172 +0000 UTC m=+0.046512084 container create 9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:18:25 compute-0 systemd[1]: Started libpod-conmon-9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb.scope.
Nov 29 06:18:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:25 compute-0 podman[84510]: 2025-11-29 06:18:25.28518993 +0000 UTC m=+0.022897882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:18:25 compute-0 podman[84510]: 2025-11-29 06:18:25.398809322 +0000 UTC m=+0.136517274 container init 9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 06:18:25 compute-0 podman[84510]: 2025-11-29 06:18:25.408076476 +0000 UTC m=+0.145784428 container start 9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:18:25 compute-0 podman[84510]: 2025-11-29 06:18:25.412197803 +0000 UTC m=+0.149905755 container attach 9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:18:25 compute-0 zen_sutherland[84526]: 167 167
Nov 29 06:18:25 compute-0 systemd[1]: libpod-9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb.scope: Deactivated successfully.
Nov 29 06:18:25 compute-0 podman[84510]: 2025-11-29 06:18:25.416599338 +0000 UTC m=+0.154307290 container died 9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 06:18:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ad404d718baa356979e3155b41b328385fe80f6fadc23fd0c21d915b8252e81-merged.mount: Deactivated successfully.
Nov 29 06:18:25 compute-0 podman[84510]: 2025-11-29 06:18:25.473888326 +0000 UTC m=+0.211596238 container remove 9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:18:25 compute-0 systemd[1]: libpod-conmon-9f80c9c7e5e4c73a87fd22b442caa9eee65c4575dbc0f7a21840dc6e2e7046cb.scope: Deactivated successfully.
Nov 29 06:18:25 compute-0 podman[84550]: 2025-11-29 06:18:25.661657967 +0000 UTC m=+0.042625853 container create bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 06:18:25 compute-0 systemd[1]: Started libpod-conmon-bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248.scope.
Nov 29 06:18:25 compute-0 podman[84550]: 2025-11-29 06:18:25.641909365 +0000 UTC m=+0.022877251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:18:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e607c02a3ea82ca9e855f220261cb971a055ffe64259d62ac96e47173d13b6f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e607c02a3ea82ca9e855f220261cb971a055ffe64259d62ac96e47173d13b6f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e607c02a3ea82ca9e855f220261cb971a055ffe64259d62ac96e47173d13b6f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e607c02a3ea82ca9e855f220261cb971a055ffe64259d62ac96e47173d13b6f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:25 compute-0 podman[84550]: 2025-11-29 06:18:25.774183038 +0000 UTC m=+0.155150984 container init bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_turing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:18:25 compute-0 podman[84550]: 2025-11-29 06:18:25.78691396 +0000 UTC m=+0.167881846 container start bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:18:25 compute-0 podman[84550]: 2025-11-29 06:18:25.791311425 +0000 UTC m=+0.172279321 container attach bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 29 06:18:25 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:26 compute-0 relaxed_turing[84566]: {
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:     "1": [
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:         {
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:             "devices": [
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:                 "/dev/loop3"
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:             ],
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:             "lv_name": "ceph_lv0",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:             "lv_size": "7511998464",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:             "name": "ceph_lv0",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:             "tags": {
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:                 "ceph.cluster_name": "ceph",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:                 "ceph.crush_device_class": "",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:                 "ceph.encrypted": "0",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:                 "ceph.osd_id": "1",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:                 "ceph.type": "block",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:                 "ceph.vdo": "0"
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:             },
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:             "type": "block",
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:             "vg_name": "ceph_vg0"
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:         }
Nov 29 06:18:26 compute-0 relaxed_turing[84566]:     ]
Nov 29 06:18:26 compute-0 relaxed_turing[84566]: }
Nov 29 06:18:26 compute-0 systemd[1]: libpod-bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248.scope: Deactivated successfully.
Nov 29 06:18:26 compute-0 podman[84550]: 2025-11-29 06:18:26.618589575 +0000 UTC m=+0.999557441 container died bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:18:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e607c02a3ea82ca9e855f220261cb971a055ffe64259d62ac96e47173d13b6f9-merged.mount: Deactivated successfully.
Nov 29 06:18:26 compute-0 podman[84550]: 2025-11-29 06:18:26.678061047 +0000 UTC m=+1.059028893 container remove bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_turing, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:18:26 compute-0 systemd[1]: libpod-conmon-bb63cde4609f1ef0b77e6d91a3df6e336e174aebca6ff0cccd6d714549a04248.scope: Deactivated successfully.
Nov 29 06:18:26 compute-0 sudo[84446]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 29 06:18:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 06:18:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:18:26 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:18:26 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 29 06:18:26 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 29 06:18:26 compute-0 sudo[84586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:26 compute-0 sudo[84586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:26 compute-0 sudo[84586]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:26 compute-0 sudo[84611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:18:26 compute-0 sudo[84611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:26 compute-0 sudo[84611]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:26 compute-0 sudo[84636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:26 compute-0 sudo[84636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:26 compute-0 sudo[84636]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:26 compute-0 ceph-mon[74654]: pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 06:18:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:18:26 compute-0 sudo[84661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:18:26 compute-0 sudo[84661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:18:27 compute-0 podman[84727]: 2025-11-29 06:18:27.446630947 +0000 UTC m=+0.072125212 container create 2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 06:18:27 compute-0 systemd[1]: Started libpod-conmon-2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc.scope.
Nov 29 06:18:27 compute-0 podman[84727]: 2025-11-29 06:18:27.415187123 +0000 UTC m=+0.040681468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:18:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:27 compute-0 podman[84727]: 2025-11-29 06:18:27.540185868 +0000 UTC m=+0.165680183 container init 2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 06:18:27 compute-0 podman[84727]: 2025-11-29 06:18:27.547440095 +0000 UTC m=+0.172934380 container start 2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_torvalds, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 06:18:27 compute-0 podman[84727]: 2025-11-29 06:18:27.551565272 +0000 UTC m=+0.177059577 container attach 2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_torvalds, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:18:27 compute-0 jovial_torvalds[84744]: 167 167
Nov 29 06:18:27 compute-0 systemd[1]: libpod-2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc.scope: Deactivated successfully.
Nov 29 06:18:27 compute-0 podman[84727]: 2025-11-29 06:18:27.553013883 +0000 UTC m=+0.178508158 container died 2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_torvalds, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 06:18:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7d6257364e1f30e3a821a05f81b3b8ef2e963560be85759ffa2e8a2f758d24f-merged.mount: Deactivated successfully.
Nov 29 06:18:27 compute-0 podman[84727]: 2025-11-29 06:18:27.601503372 +0000 UTC m=+0.226997667 container remove 2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_torvalds, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 06:18:27 compute-0 systemd[1]: libpod-conmon-2b03d4033753b11e055402783a9f95f9b78ada79eff8daec7739e578874ec7fc.scope: Deactivated successfully.
Nov 29 06:18:27 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:27 compute-0 podman[84775]: 2025-11-29 06:18:27.967561124 +0000 UTC m=+0.071995308 container create 8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 06:18:27 compute-0 ceph-mon[74654]: Deploying daemon osd.1 on compute-0
Nov 29 06:18:28 compute-0 systemd[1]: Started libpod-conmon-8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92.scope.
Nov 29 06:18:28 compute-0 podman[84775]: 2025-11-29 06:18:27.939225148 +0000 UTC m=+0.043659422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:18:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41fe15836814a3cdb20fdc65a9f8814f3bcaa2de46b0f00e08a5b8a68cdf3064/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41fe15836814a3cdb20fdc65a9f8814f3bcaa2de46b0f00e08a5b8a68cdf3064/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41fe15836814a3cdb20fdc65a9f8814f3bcaa2de46b0f00e08a5b8a68cdf3064/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41fe15836814a3cdb20fdc65a9f8814f3bcaa2de46b0f00e08a5b8a68cdf3064/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41fe15836814a3cdb20fdc65a9f8814f3bcaa2de46b0f00e08a5b8a68cdf3064/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:28 compute-0 podman[84775]: 2025-11-29 06:18:28.06865312 +0000 UTC m=+0.173087334 container init 8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:18:28 compute-0 podman[84775]: 2025-11-29 06:18:28.080745524 +0000 UTC m=+0.185179728 container start 8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:18:28 compute-0 podman[84775]: 2025-11-29 06:18:28.086297142 +0000 UTC m=+0.190731356 container attach 8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 06:18:28 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test[84791]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 06:18:28 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test[84791]:                             [--no-systemd] [--no-tmpfs]
Nov 29 06:18:28 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test[84791]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 06:18:28 compute-0 systemd[1]: libpod-8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92.scope: Deactivated successfully.
Nov 29 06:18:28 compute-0 podman[84775]: 2025-11-29 06:18:28.771438559 +0000 UTC m=+0.875872743 container died 8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 06:18:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-41fe15836814a3cdb20fdc65a9f8814f3bcaa2de46b0f00e08a5b8a68cdf3064-merged.mount: Deactivated successfully.
Nov 29 06:18:28 compute-0 podman[84775]: 2025-11-29 06:18:28.832064284 +0000 UTC m=+0.936498458 container remove 8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 06:18:28 compute-0 systemd[1]: libpod-conmon-8655f4b58fd23ddd6c98d267f9d9b5861fd4765b50c10f518742b0d848b14b92.scope: Deactivated successfully.
Nov 29 06:18:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 29 06:18:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 06:18:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:18:29 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:18:29 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Nov 29 06:18:29 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Nov 29 06:18:29 compute-0 ceph-mon[74654]: pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:29 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:30 compute-0 systemd[1]: Reloading.
Nov 29 06:18:30 compute-0 systemd-sysv-generator[84855]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:18:30 compute-0 systemd-rc-local-generator[84850]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:18:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 06:18:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:18:30 compute-0 ceph-mon[74654]: Deploying daemon osd.0 on compute-1
Nov 29 06:18:30 compute-0 systemd[1]: Reloading.
Nov 29 06:18:31 compute-0 systemd-rc-local-generator[84894]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:18:31 compute-0 systemd-sysv-generator[84898]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:18:31 compute-0 systemd[1]: Starting Ceph osd.1 for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 06:18:31 compute-0 podman[84951]: 2025-11-29 06:18:31.497119036 +0000 UTC m=+0.037015774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:18:31 compute-0 podman[84951]: 2025-11-29 06:18:31.639253588 +0000 UTC m=+0.179150266 container create f538f1c6c85cdca6e2ddb94855c3c04aacb93df85bc5224d5ccb4748eb1f85ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:18:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5efa240cd053f2e669665b6e0fe6008c4db5c8af9bec74c4daf98453c37fee64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5efa240cd053f2e669665b6e0fe6008c4db5c8af9bec74c4daf98453c37fee64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5efa240cd053f2e669665b6e0fe6008c4db5c8af9bec74c4daf98453c37fee64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5efa240cd053f2e669665b6e0fe6008c4db5c8af9bec74c4daf98453c37fee64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5efa240cd053f2e669665b6e0fe6008c4db5c8af9bec74c4daf98453c37fee64/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:31 compute-0 podman[84951]: 2025-11-29 06:18:31.736291338 +0000 UTC m=+0.276188006 container init f538f1c6c85cdca6e2ddb94855c3c04aacb93df85bc5224d5ccb4748eb1f85ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:18:31 compute-0 podman[84951]: 2025-11-29 06:18:31.746508849 +0000 UTC m=+0.286405497 container start f538f1c6c85cdca6e2ddb94855c3c04aacb93df85bc5224d5ccb4748eb1f85ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 29 06:18:31 compute-0 podman[84951]: 2025-11-29 06:18:31.749905986 +0000 UTC m=+0.289802634 container attach f538f1c6c85cdca6e2ddb94855c3c04aacb93df85bc5224d5ccb4748eb1f85ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 06:18:31 compute-0 ceph-mon[74654]: pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:31 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:18:32 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 06:18:32 compute-0 bash[84951]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 06:18:32 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate[84967]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 06:18:32 compute-0 bash[84951]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 06:18:32 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate[84967]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 06:18:32 compute-0 bash[84951]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 06:18:32 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate[84967]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 06:18:32 compute-0 bash[84951]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 06:18:32 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate[84967]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 06:18:32 compute-0 bash[84951]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 29 06:18:32 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate[84967]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 06:18:32 compute-0 bash[84951]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 06:18:32 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate[84967]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 06:18:32 compute-0 bash[84951]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 06:18:32 compute-0 systemd[1]: libpod-f538f1c6c85cdca6e2ddb94855c3c04aacb93df85bc5224d5ccb4748eb1f85ea.scope: Deactivated successfully.
Nov 29 06:18:32 compute-0 podman[84951]: 2025-11-29 06:18:32.673421822 +0000 UTC m=+1.213318510 container died f538f1c6c85cdca6e2ddb94855c3c04aacb93df85bc5224d5ccb4748eb1f85ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5efa240cd053f2e669665b6e0fe6008c4db5c8af9bec74c4daf98453c37fee64-merged.mount: Deactivated successfully.
Nov 29 06:18:32 compute-0 podman[84951]: 2025-11-29 06:18:32.756265478 +0000 UTC m=+1.296162166 container remove f538f1c6c85cdca6e2ddb94855c3c04aacb93df85bc5224d5ccb4748eb1f85ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 06:18:32 compute-0 ceph-mon[74654]: pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:33 compute-0 podman[85143]: 2025-11-29 06:18:33.006523107 +0000 UTC m=+0.049323024 container create aaeeb4acbe44bf7cf6d89d4ecc7b9d3bae84881fa82249ee28532bdc419d2e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 06:18:33 compute-0 podman[85143]: 2025-11-29 06:18:32.986070375 +0000 UTC m=+0.028870272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed60abb6f68b642fdddee6bc2862ca1579c2d2ae4c6fec73b78a6ec716d5ae0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed60abb6f68b642fdddee6bc2862ca1579c2d2ae4c6fec73b78a6ec716d5ae0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed60abb6f68b642fdddee6bc2862ca1579c2d2ae4c6fec73b78a6ec716d5ae0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed60abb6f68b642fdddee6bc2862ca1579c2d2ae4c6fec73b78a6ec716d5ae0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eed60abb6f68b642fdddee6bc2862ca1579c2d2ae4c6fec73b78a6ec716d5ae0/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:33 compute-0 podman[85143]: 2025-11-29 06:18:33.315066152 +0000 UTC m=+0.357866129 container init aaeeb4acbe44bf7cf6d89d4ecc7b9d3bae84881fa82249ee28532bdc419d2e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 06:18:33 compute-0 podman[85143]: 2025-11-29 06:18:33.324947913 +0000 UTC m=+0.367747870 container start aaeeb4acbe44bf7cf6d89d4ecc7b9d3bae84881fa82249ee28532bdc419d2e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 06:18:33 compute-0 ceph-osd[85162]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 06:18:33 compute-0 ceph-osd[85162]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 06:18:33 compute-0 ceph-osd[85162]: pidfile_write: ignore empty --pid-file
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633efaf9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633efaf9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633efaf9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633efaf9800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633f0931800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633f0931800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633f0931800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633f0931800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633f0931800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 06:18:33 compute-0 bash[85143]: aaeeb4acbe44bf7cf6d89d4ecc7b9d3bae84881fa82249ee28532bdc419d2e04
Nov 29 06:18:33 compute-0 systemd[1]: Started Ceph osd.1 for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633efaf9800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 06:18:33 compute-0 sudo[84661]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:33 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:18:33 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:33 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:18:33 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:33 compute-0 ceph-osd[85162]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 29 06:18:33 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:33 compute-0 ceph-osd[85162]: load: jerasure load: lrc 
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 06:18:33 compute-0 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 06:18:33 compute-0 sudo[85177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:33 compute-0 sudo[85177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:33 compute-0 sudo[85177]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:34 compute-0 sudo[85207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:18:34 compute-0 sudo[85207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:34 compute-0 sudo[85207]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:34 compute-0 sudo[85232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:34 compute-0 sudo[85232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:34 compute-0 sudo[85232]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:34 compute-0 sudo[85257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:18:34 compute-0 sudo[85257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 06:18:34 compute-0 ceph-osd[85162]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 06:18:34 compute-0 ceph-osd[85162]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09acc00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluefs mount
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluefs mount shared_bdev_used = 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: RocksDB version: 7.9.2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Git sha 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: DB SUMMARY
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: DB Session ID:  2QR1MYHZ2PW1Z4CTUV0E
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: CURRENT file:  CURRENT
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                         Options.error_if_exists: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.create_if_missing: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                                     Options.env: 0x5633f0983c70
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                                Options.info_log: 0x5633efb76ba0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                              Options.statistics: (nil)
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.use_fsync: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                              Options.db_log_dir: 
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.write_buffer_manager: 0x5633f0a86460
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.unordered_write: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.row_cache: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                              Options.wal_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.two_write_queues: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.wal_compression: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.atomic_flush: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.max_background_jobs: 4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.max_background_compactions: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.max_subcompactions: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.max_open_files: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Compression algorithms supported:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kZSTD supported: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kXpressCompression supported: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kBZip2Compression supported: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kLZ4Compression supported: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kZlibCompression supported: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kSnappyCompression supported: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb76600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb76600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb76600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb76600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb76600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb76600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb76600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6cdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb765c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6c430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb765c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6c430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb765c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6c430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dfb129aa-a58b-42ea-bfc0-d0183185d57f
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397114513287, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397114513504, "job": 1, "event": "recovery_finished"}
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: freelist init
Nov 29 06:18:34 compute-0 ceph-osd[85162]: freelist _read_cfg
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluefs umount
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 06:18:34 compute-0 podman[85522]: 2025-11-29 06:18:34.606498025 +0000 UTC m=+0.029003436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bdev(0x5633f09ad400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluefs mount
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluefs mount shared_bdev_used = 4718592
Nov 29 06:18:34 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: RocksDB version: 7.9.2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Git sha 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: DB SUMMARY
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: DB Session ID:  2QR1MYHZ2PW1Z4CTUV0F
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: CURRENT file:  CURRENT
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                         Options.error_if_exists: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.create_if_missing: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                                     Options.env: 0x5633efbb8700
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                                Options.info_log: 0x5633efb77860
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                              Options.statistics: (nil)
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.use_fsync: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                              Options.db_log_dir: 
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.write_buffer_manager: 0x5633f0a86960
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.unordered_write: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.row_cache: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                              Options.wal_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.two_write_queues: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.wal_compression: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.atomic_flush: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.max_background_jobs: 4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.max_background_compactions: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.max_subcompactions: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.max_open_files: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Compression algorithms supported:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kZSTD supported: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kXpressCompression supported: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kBZip2Compression supported: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kLZ4Compression supported: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kZlibCompression supported: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         kSnappyCompression supported: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb80860)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6d610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb80860)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6d610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb80860)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6d610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb80860)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6d610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb80860)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6d610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb80860)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6d610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb80860)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6d610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb808a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6d770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb808a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6d770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:           Options.merge_operator: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5633efb808a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5633efb6d770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.compression: LZ4
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.num_levels: 7
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.bloom_locality: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                               Options.ttl: 2592000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                       Options.enable_blob_files: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                           Options.min_blob_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dfb129aa-a58b-42ea-bfc0-d0183185d57f
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397114790726, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 06:18:34 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 06:18:34 compute-0 podman[85522]: 2025-11-29 06:18:34.825109513 +0000 UTC m=+0.247614874 container create b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 06:18:35 compute-0 ceph-osd[85162]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397115009622, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764397114, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfb129aa-a58b-42ea-bfc0-d0183185d57f", "db_session_id": "2QR1MYHZ2PW1Z4CTUV0F", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:18:35 compute-0 systemd[1]: Started libpod-conmon-b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737.scope.
Nov 29 06:18:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:35 compute-0 ceph-osd[85162]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397115304030, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764397115, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfb129aa-a58b-42ea-bfc0-d0183185d57f", "db_session_id": "2QR1MYHZ2PW1Z4CTUV0F", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:18:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:35 compute-0 ceph-mon[74654]: pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:35 compute-0 podman[85522]: 2025-11-29 06:18:35.532079641 +0000 UTC m=+0.954585052 container init b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 29 06:18:35 compute-0 ceph-osd[85162]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397115532383, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764397115, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dfb129aa-a58b-42ea-bfc0-d0183185d57f", "db_session_id": "2QR1MYHZ2PW1Z4CTUV0F", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:18:35 compute-0 ceph-osd[85162]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397115534133, "job": 1, "event": "recovery_finished"}
Nov 29 06:18:35 compute-0 ceph-osd[85162]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 06:18:35 compute-0 podman[85522]: 2025-11-29 06:18:35.543527547 +0000 UTC m=+0.966032888 container start b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:18:35 compute-0 naughty_edison[85721]: 167 167
Nov 29 06:18:35 compute-0 systemd[1]: libpod-b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737.scope: Deactivated successfully.
Nov 29 06:18:35 compute-0 podman[85522]: 2025-11-29 06:18:35.557798313 +0000 UTC m=+0.980303684 container attach b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_edison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:18:35 compute-0 podman[85522]: 2025-11-29 06:18:35.558678598 +0000 UTC m=+0.981183969 container died b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_edison, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 06:18:35 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5633efc3fc00
Nov 29 06:18:35 compute-0 ceph-osd[85162]: rocksdb: DB pointer 0x5633f0a6fa00
Nov 29 06:18:35 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 06:18:35 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 29 06:18:35 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 29 06:18:35 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:18:35 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 06:18:35 compute-0 ceph-osd[85162]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 06:18:35 compute-0 ceph-osd[85162]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 06:18:35 compute-0 ceph-osd[85162]: _get_class not permitted to load lua
Nov 29 06:18:35 compute-0 ceph-osd[85162]: _get_class not permitted to load sdk
Nov 29 06:18:35 compute-0 ceph-osd[85162]: _get_class not permitted to load test_remote_reads
Nov 29 06:18:35 compute-0 ceph-osd[85162]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 06:18:35 compute-0 ceph-osd[85162]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 06:18:35 compute-0 ceph-osd[85162]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 06:18:35 compute-0 ceph-osd[85162]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 06:18:35 compute-0 ceph-osd[85162]: osd.1 0 load_pgs
Nov 29 06:18:35 compute-0 ceph-osd[85162]: osd.1 0 load_pgs opened 0 pgs
Nov 29 06:18:35 compute-0 ceph-osd[85162]: osd.1 0 log_to_monitors true
Nov 29 06:18:35 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1[85158]: 2025-11-29T06:18:35.582+0000 7f4f3ca3a740 -1 osd.1 0 log_to_monitors true
Nov 29 06:18:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 29 06:18:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 06:18:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-92a2d3340cbc862a541bed5b97f4c12f115a85789cdb94d0e4d810d21bad5ac9-merged.mount: Deactivated successfully.
Nov 29 06:18:35 compute-0 podman[85522]: 2025-11-29 06:18:35.626790395 +0000 UTC m=+1.049295746 container remove b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_edison, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 06:18:35 compute-0 systemd[1]: libpod-conmon-b0d7ff18d11f046ee283d785e964afdebf733acc8af2f2ea0ee11b18d0b77737.scope: Deactivated successfully.
Nov 29 06:18:35 compute-0 podman[85777]: 2025-11-29 06:18:35.829771268 +0000 UTC m=+0.068808328 container create 30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_golick, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:18:35 compute-0 systemd[1]: Started libpod-conmon-30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35.scope.
Nov 29 06:18:35 compute-0 podman[85777]: 2025-11-29 06:18:35.804399837 +0000 UTC m=+0.043436937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:18:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646e62d3f09f54e0e8abc5665b3148e1af7d87e20126a3ae9d5ed863a52c1fed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646e62d3f09f54e0e8abc5665b3148e1af7d87e20126a3ae9d5ed863a52c1fed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646e62d3f09f54e0e8abc5665b3148e1af7d87e20126a3ae9d5ed863a52c1fed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646e62d3f09f54e0e8abc5665b3148e1af7d87e20126a3ae9d5ed863a52c1fed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:35 compute-0 podman[85777]: 2025-11-29 06:18:35.930771061 +0000 UTC m=+0.169808101 container init 30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:18:35 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:35 compute-0 podman[85777]: 2025-11-29 06:18:35.943815542 +0000 UTC m=+0.182852612 container start 30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:18:35 compute-0 podman[85777]: 2025-11-29 06:18:35.948821135 +0000 UTC m=+0.187858255 container attach 30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_golick, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:18:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 29 06:18:36 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 06:18:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 29 06:18:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 06:18:36 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 06:18:36 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 06:18:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Nov 29 06:18:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:36 compute-0 ceph-mon[74654]: from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 06:18:36 compute-0 ceph-mon[74654]: from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 06:18:36 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Nov 29 06:18:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 06:18:36 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 06:18:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0068000000000000005 at location {host=compute-0,root=default}
Nov 29 06:18:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Nov 29 06:18:36 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 29 06:18:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0068000000000000005 at location {host=compute-1,root=default}
Nov 29 06:18:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 06:18:36 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:36 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:36 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 06:18:36 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:36 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 06:18:36 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 06:18:36 compute-0 competent_golick[85794]: {
Nov 29 06:18:36 compute-0 competent_golick[85794]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:18:36 compute-0 competent_golick[85794]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:18:36 compute-0 competent_golick[85794]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:18:36 compute-0 competent_golick[85794]:         "osd_id": 1,
Nov 29 06:18:36 compute-0 competent_golick[85794]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:18:36 compute-0 competent_golick[85794]:         "type": "bluestore"
Nov 29 06:18:36 compute-0 competent_golick[85794]:     }
Nov 29 06:18:36 compute-0 competent_golick[85794]: }
Nov 29 06:18:36 compute-0 systemd[1]: libpod-30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35.scope: Deactivated successfully.
Nov 29 06:18:36 compute-0 conmon[85794]: conmon 30a006593d7f5630461b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35.scope/container/memory.events
Nov 29 06:18:36 compute-0 podman[85777]: 2025-11-29 06:18:36.856142031 +0000 UTC m=+1.095179091 container died 30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_golick, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 06:18:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-646e62d3f09f54e0e8abc5665b3148e1af7d87e20126a3ae9d5ed863a52c1fed-merged.mount: Deactivated successfully.
Nov 29 06:18:36 compute-0 podman[85777]: 2025-11-29 06:18:36.921077688 +0000 UTC m=+1.160114758 container remove 30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_golick, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 06:18:36 compute-0 systemd[1]: libpod-conmon-30a006593d7f5630461b741d473a48105ab1bcac46e3232be6820248e7056b35.scope: Deactivated successfully.
Nov 29 06:18:36 compute-0 sudo[85257]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:18:36 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:18:36 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:18:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:18:37 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:37 compute-0 sudo[85827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:37 compute-0 sudo[85827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 29 06:18:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 06:18:37 compute-0 sudo[85827]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:37 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 06:18:37 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Nov 29 06:18:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Nov 29 06:18:37 compute-0 ceph-osd[85162]: osd.1 0 done with init, starting boot process
Nov 29 06:18:37 compute-0 ceph-osd[85162]: osd.1 0 start_boot
Nov 29 06:18:37 compute-0 ceph-osd[85162]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 06:18:37 compute-0 ceph-osd[85162]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 06:18:37 compute-0 ceph-osd[85162]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 06:18:37 compute-0 ceph-osd[85162]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 06:18:37 compute-0 ceph-osd[85162]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 29 06:18:37 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Nov 29 06:18:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 06:18:37 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:37 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:37 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 06:18:37 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:37 compute-0 ceph-mon[74654]: pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:37 compute-0 ceph-mon[74654]: from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 06:18:37 compute-0 ceph-mon[74654]: from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 06:18:37 compute-0 ceph-mon[74654]: osdmap e6: 2 total, 0 up, 2 in
Nov 29 06:18:37 compute-0 ceph-mon[74654]: from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 06:18:37 compute-0 ceph-mon[74654]: from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 29 06:18:37 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:37 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:37 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:37 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:37 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:37 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 06:18:37 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 06:18:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:37 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 06:18:37 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:37 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:37 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 06:18:37 compute-0 sudo[85852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:18:37 compute-0 sudo[85852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:37 compute-0 sudo[85852]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:37 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v52: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:38 compute-0 sudo[85877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:38 compute-0 sudo[85877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:38 compute-0 sudo[85877]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:38 compute-0 sudo[85902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:18:38 compute-0 sudo[85902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:38 compute-0 sudo[85902]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:38 compute-0 sudo[85927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:38 compute-0 sudo[85927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:38 compute-0 sudo[85927]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:38 compute-0 sudo[85952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 06:18:38 compute-0 sudo[85952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:18:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:38 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 06:18:38 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 06:18:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:38 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 06:18:38 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:38 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:38 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 06:18:38 compute-0 ceph-mon[74654]: from='osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 06:18:38 compute-0 ceph-mon[74654]: from='osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Nov 29 06:18:38 compute-0 ceph-mon[74654]: osdmap e7: 2 total, 0 up, 2 in
Nov 29 06:18:38 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:38 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:38 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:38 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:38 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:38 compute-0 podman[86047]: 2025-11-29 06:18:38.875327343 +0000 UTC m=+0.081176920 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 06:18:38 compute-0 podman[86047]: 2025-11-29 06:18:38.98737673 +0000 UTC m=+0.193226227 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 06:18:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:18:39 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:18:39 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 06:18:39 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 06:18:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:39 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 06:18:39 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:39 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:39 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 06:18:39 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:39 compute-0 ceph-mon[74654]: purged_snaps scrub starts
Nov 29 06:18:39 compute-0 ceph-mon[74654]: purged_snaps scrub ok
Nov 29 06:18:39 compute-0 ceph-mon[74654]: purged_snaps scrub starts
Nov 29 06:18:39 compute-0 ceph-mon[74654]: purged_snaps scrub ok
Nov 29 06:18:39 compute-0 ceph-mon[74654]: pgmap v52: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:39 compute-0 sudo[85952]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:18:39 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:18:39 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:39 compute-0 sudo[86130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:39 compute-0 sudo[86130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:39 compute-0 sudo[86130]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:39 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v53: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:39 compute-0 sudo[86155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:18:39 compute-0 sudo[86155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:39 compute-0 sudo[86155]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:40 compute-0 sudo[86180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:40 compute-0 sudo[86180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:40 compute-0 sudo[86180]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:40 compute-0 sudo[86205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:18:40 compute-0 sudo[86205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:40 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 06:18:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:40 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:40 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:40 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 06:18:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 06:18:40 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:40 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 06:18:40 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:40 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:40 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:40 compute-0 ceph-mon[74654]: pgmap v53: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:40 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:40 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:18:40 compute-0 sudo[86205]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:40 compute-0 sudo[86261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:40 compute-0 sudo[86261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:40 compute-0 sudo[86261]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:40 compute-0 sudo[86286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:18:40 compute-0 sudo[86286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:40 compute-0 sudo[86286]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:41 compute-0 sudo[86311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:18:41 compute-0 sudo[86311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:41 compute-0 sudo[86311]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:41 compute-0 sudo[86336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- inventory --format=json-pretty --filter-for-batch
Nov 29 06:18:41 compute-0 sudo[86336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:18:41 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 06:18:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:41 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:41 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:41 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 06:18:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 06:18:41 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:41 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 06:18:41 compute-0 podman[86400]: 2025-11-29 06:18:41.645189075 +0000 UTC m=+0.094822608 container create 6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:18:41 compute-0 podman[86400]: 2025-11-29 06:18:41.590632723 +0000 UTC m=+0.040266316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:18:41 compute-0 systemd[1]: Started libpod-conmon-6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2.scope.
Nov 29 06:18:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:41 compute-0 podman[86400]: 2025-11-29 06:18:41.748127462 +0000 UTC m=+0.197761015 container init 6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lehmann, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 06:18:41 compute-0 podman[86400]: 2025-11-29 06:18:41.802231431 +0000 UTC m=+0.251864984 container start 6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lehmann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:18:41 compute-0 goofy_lehmann[86415]: 167 167
Nov 29 06:18:41 compute-0 systemd[1]: libpod-6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2.scope: Deactivated successfully.
Nov 29 06:18:41 compute-0 podman[86400]: 2025-11-29 06:18:41.82715466 +0000 UTC m=+0.276788203 container attach 6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:18:41 compute-0 podman[86400]: 2025-11-29 06:18:41.828036435 +0000 UTC m=+0.277669968 container died 6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lehmann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:18:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bff137a26f019947d08e7348c43affbf17661e051a393a05cb96d0c48b33894-merged.mount: Deactivated successfully.
Nov 29 06:18:41 compute-0 podman[86400]: 2025-11-29 06:18:41.925150807 +0000 UTC m=+0.374784340 container remove 6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lehmann, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:18:41 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v54: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:41 compute-0 systemd[1]: libpod-conmon-6b2fe0ba7b4f2cfbf59fbaa3070f28df6276301e2391d6003dbd5c3be7133ec2.scope: Deactivated successfully.
Nov 29 06:18:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:18:42 compute-0 podman[86439]: 2025-11-29 06:18:42.135329915 +0000 UTC m=+0.092457381 container create b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 06:18:42 compute-0 podman[86439]: 2025-11-29 06:18:42.068316699 +0000 UTC m=+0.025444245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:18:42 compute-0 systemd[1]: Started libpod-conmon-b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388.scope.
Nov 29 06:18:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe01acc49945aeb98bf1aa581ac0a4c85a79ae0dc167f6a1f350b9968249b46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe01acc49945aeb98bf1aa581ac0a4c85a79ae0dc167f6a1f350b9968249b46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe01acc49945aeb98bf1aa581ac0a4c85a79ae0dc167f6a1f350b9968249b46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe01acc49945aeb98bf1aa581ac0a4c85a79ae0dc167f6a1f350b9968249b46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:42 compute-0 podman[86439]: 2025-11-29 06:18:42.378730288 +0000 UTC m=+0.335857754 container init b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 06:18:42 compute-0 podman[86439]: 2025-11-29 06:18:42.388319701 +0000 UTC m=+0.345447177 container start b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 06:18:42 compute-0 podman[86439]: 2025-11-29 06:18:42.422071761 +0000 UTC m=+0.379199237 container attach b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 29 06:18:42 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 06:18:42 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:42 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:42 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 06:18:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 06:18:42 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:42 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 06:18:42 compute-0 ceph-mon[74654]: pgmap v54: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:42 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:42 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:18:43 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:43 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 06:18:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 06:18:43 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:43 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 06:18:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:18:43 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 06:18:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:43 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:43 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:43 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:18:43 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:18:43 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 29 06:18:43 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 06:18:43 compute-0 ceph-mgr[74948]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Nov 29 06:18:43 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Nov 29 06:18:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 06:18:43 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:43 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v55: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]: [
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:     {
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:         "available": false,
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:         "ceph_device": false,
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:         "lsm_data": {},
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:         "lvs": [],
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:         "path": "/dev/sr0",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:         "rejected_reasons": [
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "Has a FileSystem",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "Insufficient space (<5GB)"
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:         ],
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:         "sys_api": {
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "actuators": null,
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "device_nodes": "sr0",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "devname": "sr0",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "human_readable_size": "482.00 KB",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "id_bus": "ata",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "model": "QEMU DVD-ROM",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "nr_requests": "2",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "parent": "/dev/sr0",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "partitions": {},
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "path": "/dev/sr0",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "removable": "1",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "rev": "2.5+",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "ro": "0",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "rotational": "1",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "sas_address": "",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "sas_device_handle": "",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "scheduler_mode": "mq-deadline",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "sectors": 0,
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "sectorsize": "2048",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "size": 493568.0,
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "support_discard": "2048",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "type": "disk",
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:             "vendor": "QEMU"
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:         }
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]:     }
Nov 29 06:18:44 compute-0 optimistic_chandrasekhar[86455]: ]
Nov 29 06:18:44 compute-0 systemd[1]: libpod-b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388.scope: Deactivated successfully.
Nov 29 06:18:44 compute-0 podman[86439]: 2025-11-29 06:18:44.154258879 +0000 UTC m=+2.111386345 container died b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:18:44 compute-0 systemd[1]: libpod-b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388.scope: Consumed 1.781s CPU time.
Nov 29 06:18:44 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 06:18:44 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 06:18:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fe01acc49945aeb98bf1aa581ac0a4c85a79ae0dc167f6a1f350b9968249b46-merged.mount: Deactivated successfully.
Nov 29 06:18:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 29 06:18:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 06:18:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 06:18:44 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:44 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:44 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 06:18:44 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:45 compute-0 ceph-mon[74654]: OSD bench result of 3033.995593 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 06:18:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 06:18:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:45 compute-0 sudo[87569]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzberxjidfrqzwjqwotiujfkjrsnjjsw ; /usr/bin/python3'
Nov 29 06:18:45 compute-0 sudo[87569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:45 compute-0 python3[87571]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:18:45 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2367106429; not ready for session (expect reconnect)
Nov 29 06:18:45 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 06:18:45 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v56: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:46 compute-0 podman[86439]: 2025-11-29 06:18:46.022719114 +0000 UTC m=+3.979846620 container remove b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 06:18:46 compute-0 sudo[86336]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:46 compute-0 systemd[1]: libpod-conmon-b34fd008fc6133ce07387f660b49a9ab5ab042b12519fa23e83dad8a3c1fc388.scope: Deactivated successfully.
Nov 29 06:18:46 compute-0 podman[87573]: 2025-11-29 06:18:46.068257659 +0000 UTC m=+0.471150752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:18:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 06:18:46 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:46 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:46 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 06:18:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:18:46 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Nov 29 06:18:46 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429] boot
Nov 29 06:18:46 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
Nov 29 06:18:46 compute-0 podman[87573]: 2025-11-29 06:18:46.487743991 +0000 UTC m=+0.890637054 container create 0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8 (image=quay.io/ceph/ceph:v18, name=naughty_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:18:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 06:18:46 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:46 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:46 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:46 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 06:18:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:46 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:46 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:46 compute-0 ceph-mon[74654]: Adjusting osd_memory_target on compute-1 to  5247M
Nov 29 06:18:46 compute-0 ceph-mon[74654]: pgmap v55: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:18:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:18:47 compute-0 systemd[1]: Started libpod-conmon-0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8.scope.
Nov 29 06:18:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155555d5af453f76fad9e26aca879a8f5a0b9ba4d577404fba06c94b0d4b659c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155555d5af453f76fad9e26aca879a8f5a0b9ba4d577404fba06c94b0d4b659c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/155555d5af453f76fad9e26aca879a8f5a0b9ba4d577404fba06c94b0d4b659c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:47 compute-0 podman[87573]: 2025-11-29 06:18:47.600096988 +0000 UTC m=+2.002990091 container init 0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8 (image=quay.io/ceph/ceph:v18, name=naughty_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:18:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:18:47 compute-0 podman[87573]: 2025-11-29 06:18:47.614827347 +0000 UTC m=+2.017720450 container start 0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8 (image=quay.io/ceph/ceph:v18, name=naughty_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 06:18:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:18:47 compute-0 podman[87573]: 2025-11-29 06:18:47.654750093 +0000 UTC m=+2.057643156 container attach 0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8 (image=quay.io/ceph/ceph:v18, name=naughty_shamir, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:18:47 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 06:18:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:47 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:47 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 29 06:18:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 06:18:47 compute-0 ceph-mgr[74948]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Nov 29 06:18:47 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Nov 29 06:18:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 06:18:47 compute-0 ceph-mgr[74948]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 29 06:18:47 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 29 06:18:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 29 06:18:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 06:18:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Nov 29 06:18:47 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
Nov 29 06:18:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:47 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:47 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:47 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Nov 29 06:18:47 compute-0 ceph-mon[74654]: pgmap v56: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 06:18:47 compute-0 ceph-mon[74654]: osd.0 [v2:192.168.122.101:6800/2367106429,v1:192.168.122.101:6801/2367106429] boot
Nov 29 06:18:47 compute-0 ceph-mon[74654]: osdmap e8: 2 total, 1 up, 2 in
Nov 29 06:18:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 06:18:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 06:18:48 compute-0 ceph-osd[85162]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 2.751 iops: 704.243 elapsed_sec: 4.260
Nov 29 06:18:48 compute-0 ceph-osd[85162]: log_channel(cluster) log [WRN] : OSD bench result of 704.243090 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 06:18:48 compute-0 ceph-osd[85162]: osd.1 0 waiting for initial osdmap
Nov 29 06:18:48 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1[85158]: 2025-11-29T06:18:48.119+0000 7f4f389ba640 -1 osd.1 0 waiting for initial osdmap
Nov 29 06:18:48 compute-0 ceph-osd[85162]: osd.1 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 29 06:18:48 compute-0 ceph-osd[85162]: osd.1 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 29 06:18:48 compute-0 ceph-osd[85162]: osd.1 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 29 06:18:48 compute-0 ceph-osd[85162]: osd.1 9 check_osdmap_features require_osd_release unknown -> reef
Nov 29 06:18:48 compute-0 ceph-osd[85162]: osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 06:18:48 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-osd-1[85158]: 2025-11-29T06:18:48.164+0000 7f4f33fe2640 -1 osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 06:18:48 compute-0 ceph-osd[85162]: osd.1 9 set_numa_affinity not setting numa affinity
Nov 29 06:18:48 compute-0 ceph-osd[85162]: osd.1 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 29 06:18:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 06:18:48 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4154088777' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 06:18:48 compute-0 naughty_shamir[87589]: 
Nov 29 06:18:48 compute-0 naughty_shamir[87589]: {"fsid":"336ec58c-893b-528f-a0c1-6ed1196bc047","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false},"CEPHADM_REFRESH_FAILED":{"severity":"HEALTH_WARN","summary":{"message":"failed to probe daemons or devices","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":161,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":9,"num_osds":2,"num_up_osds":1,"osd_up_since":1764397124,"num_in_osds":2,"osd_in_since":1764397101,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T06:17:55.922038+0000","services":{}},"progress_events":{}}
Nov 29 06:18:48 compute-0 ceph-mgr[74948]: [devicehealth INFO root] creating mgr pool
Nov 29 06:18:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 29 06:18:48 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 06:18:48 compute-0 systemd[1]: libpod-0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8.scope: Deactivated successfully.
Nov 29 06:18:48 compute-0 podman[87573]: 2025-11-29 06:18:48.304559095 +0000 UTC m=+2.707452198 container died 0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8 (image=quay.io/ceph/ceph:v18, name=naughty_shamir, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 06:18:48 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 06:18:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:48 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:48 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-155555d5af453f76fad9e26aca879a8f5a0b9ba4d577404fba06c94b0d4b659c-merged.mount: Deactivated successfully.
Nov 29 06:18:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 29 06:18:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 06:18:49 compute-0 ceph-osd[85162]: osd.1 9 tick checking mon for new map
Nov 29 06:18:49 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2704652432; not ready for session (expect reconnect)
Nov 29 06:18:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:49 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:49 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 06:18:49 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 06:18:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Nov 29 06:18:49 compute-0 podman[87573]: 2025-11-29 06:18:49.869038584 +0000 UTC m=+4.271931677 container remove 0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8 (image=quay.io/ceph/ceph:v18, name=naughty_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 06:18:49 compute-0 sudo[87569]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:49 compute-0 systemd[1]: libpod-conmon-0a4a1ef266e3eeb6737e246b1ad1178bc75fba9b36468eca81ce9443bec7f4b8.scope: Deactivated successfully.
Nov 29 06:18:49 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v60: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Nov 29 06:18:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Nov 29 06:18:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 06:18:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 06:18:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 06:18:50 compute-0 ceph-mon[74654]: Adjusting osd_memory_target on compute-0 to 128.0M
Nov 29 06:18:50 compute-0 ceph-mon[74654]: Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 29 06:18:50 compute-0 ceph-mon[74654]: osdmap e9: 2 total, 1 up, 2 in
Nov 29 06:18:50 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:50 compute-0 ceph-mon[74654]: pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Nov 29 06:18:50 compute-0 ceph-mon[74654]: OSD bench result of 704.243090 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 06:18:50 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/4154088777' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 06:18:50 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 06:18:50 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:50 compute-0 sudo[87654]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocssfzyiffvxgxvcocolhfmhgtojpyrh ; /usr/bin/python3'
Nov 29 06:18:50 compute-0 sudo[87654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:50 compute-0 ceph-osd[85162]: osd.1 10 state: booting -> active
Nov 29 06:18:50 compute-0 ceph-osd[85162]: osd.1 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 06:18:50 compute-0 ceph-osd[85162]: osd.1 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 29 06:18:50 compute-0 ceph-osd[85162]: osd.1 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 06:18:50 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432] boot
Nov 29 06:18:50 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Nov 29 06:18:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 06:18:50 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 29 06:18:50 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 06:18:50 compute-0 python3[87656]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:18:50 compute-0 podman[87657]: 2025-11-29 06:18:50.592238443 +0000 UTC m=+0.055084748 container create bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332 (image=quay.io/ceph/ceph:v18, name=interesting_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 06:18:50 compute-0 podman[87657]: 2025-11-29 06:18:50.563772104 +0000 UTC m=+0.026618469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:18:50 compute-0 systemd[1]: Started libpod-conmon-bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332.scope.
Nov 29 06:18:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13204247c2a1b37eeecf24f2c04e180cd468ed2252d74ee55bfa6fcc1e2d686b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13204247c2a1b37eeecf24f2c04e180cd468ed2252d74ee55bfa6fcc1e2d686b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 29 06:18:50 compute-0 podman[87657]: 2025-11-29 06:18:50.808415082 +0000 UTC m=+0.271261397 container init bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332 (image=quay.io/ceph/ceph:v18, name=interesting_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:18:50 compute-0 podman[87657]: 2025-11-29 06:18:50.814969938 +0000 UTC m=+0.277816243 container start bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332 (image=quay.io/ceph/ceph:v18, name=interesting_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 06:18:51 compute-0 podman[87657]: 2025-11-29 06:18:51.13178245 +0000 UTC m=+0.594628825 container attach bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332 (image=quay.io/ceph/ceph:v18, name=interesting_lumiere, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 06:18:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 06:18:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Nov 29 06:18:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 06:18:51 compute-0 ceph-mon[74654]: pgmap v60: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Nov 29 06:18:51 compute-0 ceph-mon[74654]: osd.1 [v2:192.168.122.100:6802/2704652432,v1:192.168.122.100:6803/2704652432] boot
Nov 29 06:18:51 compute-0 ceph-mon[74654]: osdmap e10: 2 total, 2 up, 2 in
Nov 29 06:18:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 06:18:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 06:18:51 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Nov 29 06:18:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 06:18:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3176932223' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 06:18:51 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v63: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Nov 29 06:18:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 29 06:18:52 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 06:18:52 compute-0 ceph-mon[74654]: osdmap e11: 2 total, 2 up, 2 in
Nov 29 06:18:52 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3176932223' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 06:18:52 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3176932223' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 06:18:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Nov 29 06:18:52 compute-0 interesting_lumiere[87672]: pool 'vms' created
Nov 29 06:18:52 compute-0 systemd[1]: libpod-bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332.scope: Deactivated successfully.
Nov 29 06:18:52 compute-0 podman[87657]: 2025-11-29 06:18:52.769456771 +0000 UTC m=+2.232303066 container died bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332 (image=quay.io/ceph/ceph:v18, name=interesting_lumiere, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:18:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Nov 29 06:18:52 compute-0 ceph-mgr[74948]: [devicehealth INFO root] creating main.db for devicehealth
Nov 29 06:18:52 compute-0 ceph-mgr[74948]: [devicehealth INFO root] Check health
Nov 29 06:18:52 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 06:18:52 compute-0 sudo[87723]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 29 06:18:52 compute-0 sudo[87723]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 06:18:52 compute-0 sudo[87723]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 29 06:18:52 compute-0 sudo[87723]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:52 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 06:18:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 06:18:52 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 06:18:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-13204247c2a1b37eeecf24f2c04e180cd468ed2252d74ee55bfa6fcc1e2d686b-merged.mount: Deactivated successfully.
Nov 29 06:18:53 compute-0 podman[87657]: 2025-11-29 06:18:53.124029956 +0000 UTC m=+2.586876271 container remove bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332 (image=quay.io/ceph/ceph:v18, name=interesting_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 06:18:53 compute-0 systemd[1]: libpod-conmon-bffb0b92a6ce8fdb45b613c8fdba6c34e9082950dbee225613a6782cf5784332.scope: Deactivated successfully.
Nov 29 06:18:53 compute-0 sudo[87654]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:53 compute-0 sudo[87751]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkyvstnqqlpjxyfusdxpazxlgfwjhuib ; /usr/bin/python3'
Nov 29 06:18:53 compute-0 sudo[87751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:53 compute-0 python3[87753]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:18:53 compute-0 podman[87754]: 2025-11-29 06:18:53.515151181 +0000 UTC m=+0.032608309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:18:53 compute-0 ceph-mon[74654]: pgmap v63: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Nov 29 06:18:53 compute-0 podman[87754]: 2025-11-29 06:18:53.637995135 +0000 UTC m=+0.155452183 container create f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7 (image=quay.io/ceph/ceph:v18, name=zen_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:18:53 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3176932223' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 06:18:53 compute-0 ceph-mon[74654]: osdmap e12: 2 total, 2 up, 2 in
Nov 29 06:18:53 compute-0 ceph-mon[74654]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 06:18:53 compute-0 ceph-mon[74654]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 06:18:53 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 06:18:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 29 06:18:53 compute-0 systemd[1]: Started libpod-conmon-f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7.scope.
Nov 29 06:18:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d73f6a99c03753a0035c62966fa288edab79721476d4e11bbe831a912091b14/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d73f6a99c03753a0035c62966fa288edab79721476d4e11bbe831a912091b14/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Nov 29 06:18:53 compute-0 podman[87754]: 2025-11-29 06:18:53.798222262 +0000 UTC m=+0.315679400 container init f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7 (image=quay.io/ceph/ceph:v18, name=zen_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:18:53 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Nov 29 06:18:53 compute-0 podman[87754]: 2025-11-29 06:18:53.803704318 +0000 UTC m=+0.321161366 container start f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7 (image=quay.io/ceph/ceph:v18, name=zen_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 06:18:53 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v66: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Nov 29 06:18:54 compute-0 podman[87754]: 2025-11-29 06:18:54.03968624 +0000 UTC m=+0.557143338 container attach f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7 (image=quay.io/ceph/ceph:v18, name=zen_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:18:54
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Some PGs (0.500000) are unknown; try again later
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 06:18:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 06:18:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:18:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 06:18:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/577122409' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 06:18:54 compute-0 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 06:18:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 29 06:18:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:18:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/577122409' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 06:18:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Nov 29 06:18:54 compute-0 zen_hellman[87769]: pool 'volumes' created
Nov 29 06:18:54 compute-0 ceph-mon[74654]: osdmap e13: 2 total, 2 up, 2 in
Nov 29 06:18:54 compute-0 ceph-mon[74654]: pgmap v66: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Nov 29 06:18:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:18:54 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/577122409' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 06:18:54 compute-0 systemd[1]: libpod-f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7.scope: Deactivated successfully.
Nov 29 06:18:54 compute-0 podman[87754]: 2025-11-29 06:18:54.899531405 +0000 UTC m=+1.416988493 container died f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7 (image=quay.io/ceph/ceph:v18, name=zen_hellman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 06:18:54 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.vxabpq(active, since 2m)
Nov 29 06:18:54 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev 064b1892-32fb-43cc-8532-5dc790b59bb3 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev 064b1892-32fb-43cc-8532-5dc790b59bb3 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event 064b1892-32fb-43cc-8532-5dc790b59bb3 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 0 seconds
Nov 29 06:18:54 compute-0 ceph-mgr[74948]: [progress INFO root] Writing back 3 completed events
Nov 29 06:18:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 06:18:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d73f6a99c03753a0035c62966fa288edab79721476d4e11bbe831a912091b14-merged.mount: Deactivated successfully.
Nov 29 06:18:54 compute-0 podman[87754]: 2025-11-29 06:18:54.963646359 +0000 UTC m=+1.481103437 container remove f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7 (image=quay.io/ceph/ceph:v18, name=zen_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 29 06:18:54 compute-0 systemd[1]: libpod-conmon-f7d3b015409ce580acc72cb125109a0b4a7018ac0f482feb3b01a8ac019ce8a7.scope: Deactivated successfully.
Nov 29 06:18:55 compute-0 sudo[87751]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:55 compute-0 sudo[87832]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqexasoxdsfhwmpyrxjbfcbgwcbxthnn ; /usr/bin/python3'
Nov 29 06:18:55 compute-0 sudo[87832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:55 compute-0 python3[87834]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:18:55 compute-0 podman[87835]: 2025-11-29 06:18:55.451288059 +0000 UTC m=+0.041009177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:18:55 compute-0 podman[87835]: 2025-11-29 06:18:55.81272956 +0000 UTC m=+0.402450688 container create 49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36 (image=quay.io/ceph/ceph:v18, name=recursing_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 06:18:55 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:18:55 compute-0 systemd[1]: Started libpod-conmon-49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36.scope.
Nov 29 06:18:55 compute-0 ceph-mon[74654]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 06:18:55 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:18:55 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/577122409' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 06:18:55 compute-0 ceph-mon[74654]: mgrmap e8: compute-0.vxabpq(active, since 2m)
Nov 29 06:18:55 compute-0 ceph-mon[74654]: osdmap e14: 2 total, 2 up, 2 in
Nov 29 06:18:55 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:18:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/125fab57deb9e97712df3812e2315839f5287fa47af0f78436e0e9a16e8d8a0d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/125fab57deb9e97712df3812e2315839f5287fa47af0f78436e0e9a16e8d8a0d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 29 06:18:55 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v68: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:18:55 compute-0 podman[87835]: 2025-11-29 06:18:55.957453876 +0000 UTC m=+0.547174984 container init 49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36 (image=quay.io/ceph/ceph:v18, name=recursing_blackwell, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 06:18:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Nov 29 06:18:55 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Nov 29 06:18:55 compute-0 podman[87835]: 2025-11-29 06:18:55.96602135 +0000 UTC m=+0.555742428 container start 49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36 (image=quay.io/ceph/ceph:v18, name=recursing_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:18:55 compute-0 podman[87835]: 2025-11-29 06:18:55.970464236 +0000 UTC m=+0.560185324 container attach 49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36 (image=quay.io/ceph/ceph:v18, name=recursing_blackwell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:18:55 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:18:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 06:18:56 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1457732535' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 06:18:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 29 06:18:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1457732535' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 06:18:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Nov 29 06:18:57 compute-0 recursing_blackwell[87850]: pool 'backups' created
Nov 29 06:18:57 compute-0 ceph-mon[74654]: pgmap v68: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:18:57 compute-0 ceph-mon[74654]: osdmap e15: 2 total, 2 up, 2 in
Nov 29 06:18:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1457732535' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 06:18:57 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Nov 29 06:18:57 compute-0 systemd[1]: libpod-49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36.scope: Deactivated successfully.
Nov 29 06:18:57 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 16 pg[4.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:18:57 compute-0 podman[87835]: 2025-11-29 06:18:57.398105883 +0000 UTC m=+1.987826971 container died 49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36 (image=quay.io/ceph/ceph:v18, name=recursing_blackwell, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:18:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-125fab57deb9e97712df3812e2315839f5287fa47af0f78436e0e9a16e8d8a0d-merged.mount: Deactivated successfully.
Nov 29 06:18:57 compute-0 podman[87835]: 2025-11-29 06:18:57.761026874 +0000 UTC m=+2.350747962 container remove 49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36 (image=quay.io/ceph/ceph:v18, name=recursing_blackwell, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:18:57 compute-0 systemd[1]: libpod-conmon-49a1ab6331bc7fc6abc9d7c6b744c56022f3e08a5e92e2069145227cf902de36.scope: Deactivated successfully.
Nov 29 06:18:57 compute-0 sudo[87832]: pam_unix(sudo:session): session closed for user root
Nov 29 06:18:57 compute-0 sudo[87913]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjwwyochucngjabgddgdqtgzlgmnytve ; /usr/bin/python3'
Nov 29 06:18:57 compute-0 sudo[87913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:18:57 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v71: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:18:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:18:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:18:58 compute-0 python3[87915]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:18:58 compute-0 podman[87916]: 2025-11-29 06:18:58.089326382 +0000 UTC m=+0.020018611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:18:58 compute-0 podman[87916]: 2025-11-29 06:18:58.205138996 +0000 UTC m=+0.135831245 container create fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f (image=quay.io/ceph/ceph:v18, name=dazzling_goldwasser, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:18:58 compute-0 systemd[1]: Started libpod-conmon-fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f.scope.
Nov 29 06:18:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00acfd42f547aaeb77ac6393d6fd6c41b796415918d47ba54ef90c269849cb73/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00acfd42f547aaeb77ac6393d6fd6c41b796415918d47ba54ef90c269849cb73/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:18:58 compute-0 podman[87916]: 2025-11-29 06:18:58.341512895 +0000 UTC m=+0.272205194 container init fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f (image=quay.io/ceph/ceph:v18, name=dazzling_goldwasser, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:18:58 compute-0 podman[87916]: 2025-11-29 06:18:58.349844392 +0000 UTC m=+0.280536641 container start fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f (image=quay.io/ceph/ceph:v18, name=dazzling_goldwasser, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 06:18:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 29 06:18:58 compute-0 podman[87916]: 2025-11-29 06:18:58.548118301 +0000 UTC m=+0.478810550 container attach fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f (image=quay.io/ceph/ceph:v18, name=dazzling_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:18:58 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:18:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Nov 29 06:18:58 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Nov 29 06:18:59 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 17 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:18:59 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1457732535' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 06:18:59 compute-0 ceph-mon[74654]: osdmap e16: 2 total, 2 up, 2 in
Nov 29 06:18:59 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:18:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 06:18:59 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2491487437' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 06:18:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 29 06:18:59 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2491487437' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 06:18:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Nov 29 06:18:59 compute-0 dazzling_goldwasser[87932]: pool 'images' created
Nov 29 06:18:59 compute-0 systemd[1]: libpod-fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f.scope: Deactivated successfully.
Nov 29 06:18:59 compute-0 podman[87916]: 2025-11-29 06:18:59.741276719 +0000 UTC m=+1.671968968 container died fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f (image=quay.io/ceph/ceph:v18, name=dazzling_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:18:59 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Nov 29 06:18:59 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v74: 36 pgs: 2 active+clean, 34 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:00 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 18 pg[5.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:00 compute-0 ceph-mon[74654]: pgmap v71: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:00 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:19:00 compute-0 ceph-mon[74654]: osdmap e17: 2 total, 2 up, 2 in
Nov 29 06:19:00 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2491487437' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 06:19:00 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2491487437' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 06:19:00 compute-0 ceph-mon[74654]: osdmap e18: 2 total, 2 up, 2 in
Nov 29 06:19:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-00acfd42f547aaeb77ac6393d6fd6c41b796415918d47ba54ef90c269849cb73-merged.mount: Deactivated successfully.
Nov 29 06:19:00 compute-0 podman[87916]: 2025-11-29 06:19:00.613450857 +0000 UTC m=+2.544143066 container remove fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f (image=quay.io/ceph/ceph:v18, name=dazzling_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:00 compute-0 systemd[1]: libpod-conmon-fc728f0e24d25d5215296ff5620eeb4b71b82f3fefad4c243cee7bc26fb28a8f.scope: Deactivated successfully.
Nov 29 06:19:00 compute-0 sudo[87913]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 29 06:19:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 06:19:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Nov 29 06:19:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Nov 29 06:19:00 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 19 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [1] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:00 compute-0 sudo[87994]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpgnjgwqmlblzeonzonbxajvrybykzkv ; /usr/bin/python3'
Nov 29 06:19:00 compute-0 sudo[87994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:00 compute-0 python3[87996]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:01 compute-0 podman[87997]: 2025-11-29 06:19:01.087622234 +0000 UTC m=+0.132591753 container create e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d (image=quay.io/ceph/ceph:v18, name=sad_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 06:19:01 compute-0 podman[87997]: 2025-11-29 06:19:00.995414211 +0000 UTC m=+0.040383790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:01 compute-0 systemd[76267]: Starting Mark boot as successful...
Nov 29 06:19:01 compute-0 systemd[76267]: Finished Mark boot as successful.
Nov 29 06:19:01 compute-0 systemd[1]: Started libpod-conmon-e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d.scope.
Nov 29 06:19:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/801e0b9f9daf6ae2f95eea0301b5348fd1e0467b5fab5ae2935d466989fd1d7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/801e0b9f9daf6ae2f95eea0301b5348fd1e0467b5fab5ae2935d466989fd1d7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:01 compute-0 podman[87997]: 2025-11-29 06:19:01.23309285 +0000 UTC m=+0.278062419 container init e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d (image=quay.io/ceph/ceph:v18, name=sad_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:19:01 compute-0 podman[87997]: 2025-11-29 06:19:01.239325578 +0000 UTC m=+0.284295107 container start e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d (image=quay.io/ceph/ceph:v18, name=sad_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 06:19:01 compute-0 podman[87997]: 2025-11-29 06:19:01.265323997 +0000 UTC m=+0.310293486 container attach e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d (image=quay.io/ceph/ceph:v18, name=sad_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 06:19:01 compute-0 ceph-mon[74654]: pgmap v74: 36 pgs: 2 active+clean, 34 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:01 compute-0 ceph-mon[74654]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 06:19:01 compute-0 ceph-mon[74654]: osdmap e19: 2 total, 2 up, 2 in
Nov 29 06:19:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 06:19:01 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2900095816' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 06:19:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 29 06:19:01 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2900095816' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 06:19:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Nov 29 06:19:01 compute-0 sad_wing[88013]: pool 'cephfs.cephfs.meta' created
Nov 29 06:19:01 compute-0 systemd[1]: libpod-e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d.scope: Deactivated successfully.
Nov 29 06:19:01 compute-0 podman[87997]: 2025-11-29 06:19:01.834045103 +0000 UTC m=+0.879014632 container died e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d (image=quay.io/ceph/ceph:v18, name=sad_wing, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:01 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Nov 29 06:19:01 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v77: 37 pgs: 1 unknown, 1 creating+peering, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-801e0b9f9daf6ae2f95eea0301b5348fd1e0467b5fab5ae2935d466989fd1d7b-merged.mount: Deactivated successfully.
Nov 29 06:19:02 compute-0 podman[87997]: 2025-11-29 06:19:02.170660777 +0000 UTC m=+1.215630276 container remove e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d (image=quay.io/ceph/ceph:v18, name=sad_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:19:02 compute-0 systemd[1]: libpod-conmon-e88f02180bc7f94f6e1d7eee84f70a5ce5216927e7f262cec01b9f961c297f4d.scope: Deactivated successfully.
Nov 29 06:19:02 compute-0 sudo[87994]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:02 compute-0 sudo[88075]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptdpufjzwqtbedhethcdqmnjbfcjhzrs ; /usr/bin/python3'
Nov 29 06:19:02 compute-0 sudo[88075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:19:02 compute-0 python3[88077]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:02 compute-0 ceph-mon[74654]: 2.1 deep-scrub starts
Nov 29 06:19:02 compute-0 ceph-mon[74654]: 2.1 deep-scrub ok
Nov 29 06:19:02 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2900095816' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 06:19:02 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2900095816' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 06:19:02 compute-0 ceph-mon[74654]: osdmap e20: 2 total, 2 up, 2 in
Nov 29 06:19:02 compute-0 podman[88078]: 2025-11-29 06:19:02.563259614 +0000 UTC m=+0.051179527 container create 8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898 (image=quay.io/ceph/ceph:v18, name=nifty_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 29 06:19:02 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 20 pg[6.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:02 compute-0 systemd[1]: Started libpod-conmon-8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898.scope.
Nov 29 06:19:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad99c641ec54b1c1331d492a2c4bbce18a4402a11c3a659e288cdaaa6ddb66fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad99c641ec54b1c1331d492a2c4bbce18a4402a11c3a659e288cdaaa6ddb66fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:02 compute-0 podman[88078]: 2025-11-29 06:19:02.54166323 +0000 UTC m=+0.029583153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:02 compute-0 podman[88078]: 2025-11-29 06:19:02.637749063 +0000 UTC m=+0.125668996 container init 8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898 (image=quay.io/ceph/ceph:v18, name=nifty_murdock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 06:19:02 compute-0 podman[88078]: 2025-11-29 06:19:02.645448772 +0000 UTC m=+0.133368685 container start 8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898 (image=quay.io/ceph/ceph:v18, name=nifty_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 06:19:02 compute-0 podman[88078]: 2025-11-29 06:19:02.6482059 +0000 UTC m=+0.136125803 container attach 8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898 (image=quay.io/ceph/ceph:v18, name=nifty_murdock, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:19:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 06:19:03 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/956031255' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 06:19:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 29 06:19:03 compute-0 ceph-mon[74654]: pgmap v77: 37 pgs: 1 unknown, 1 creating+peering, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:03 compute-0 ceph-mon[74654]: 2.2 scrub starts
Nov 29 06:19:03 compute-0 ceph-mon[74654]: 2.2 scrub ok
Nov 29 06:19:03 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/956031255' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 06:19:03 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v78: 37 pgs: 1 unknown, 1 creating+peering, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:04 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/956031255' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 06:19:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Nov 29 06:19:04 compute-0 nifty_murdock[88093]: pool 'cephfs.cephfs.data' created
Nov 29 06:19:04 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Nov 29 06:19:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 21 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:04 compute-0 systemd[1]: libpod-8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898.scope: Deactivated successfully.
Nov 29 06:19:04 compute-0 podman[88078]: 2025-11-29 06:19:04.423480235 +0000 UTC m=+1.911400158 container died 8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898 (image=quay.io/ceph/ceph:v18, name=nifty_murdock, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 06:19:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad99c641ec54b1c1331d492a2c4bbce18a4402a11c3a659e288cdaaa6ddb66fa-merged.mount: Deactivated successfully.
Nov 29 06:19:04 compute-0 podman[88078]: 2025-11-29 06:19:04.584165135 +0000 UTC m=+2.072085048 container remove 8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898 (image=quay.io/ceph/ceph:v18, name=nifty_murdock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 06:19:04 compute-0 systemd[1]: libpod-conmon-8d407e643af761493fbf34c0af0a9b8360abd27652467a0355c6ca887d654898.scope: Deactivated successfully.
Nov 29 06:19:04 compute-0 sudo[88075]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:04 compute-0 sudo[88154]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rajhzhezintgpgsdwbkwbmjhilfahcsw ; /usr/bin/python3'
Nov 29 06:19:04 compute-0 sudo[88154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:04 compute-0 python3[88156]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:04 compute-0 podman[88157]: 2025-11-29 06:19:04.980210158 +0000 UTC m=+0.083846956 container create 229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72 (image=quay.io/ceph/ceph:v18, name=ecstatic_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 29 06:19:05 compute-0 systemd[1]: Started libpod-conmon-229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72.scope.
Nov 29 06:19:05 compute-0 podman[88157]: 2025-11-29 06:19:04.934589811 +0000 UTC m=+0.038226689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d455d7c19516f932edba8bbc8679b59738a39f0026f5f872814f88a735cf506/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d455d7c19516f932edba8bbc8679b59738a39f0026f5f872814f88a735cf506/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:05 compute-0 podman[88157]: 2025-11-29 06:19:05.071856085 +0000 UTC m=+0.175492933 container init 229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72 (image=quay.io/ceph/ceph:v18, name=ecstatic_gauss, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 06:19:05 compute-0 podman[88157]: 2025-11-29 06:19:05.08294034 +0000 UTC m=+0.186577138 container start 229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72 (image=quay.io/ceph/ceph:v18, name=ecstatic_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 06:19:05 compute-0 podman[88157]: 2025-11-29 06:19:05.087006906 +0000 UTC m=+0.190643694 container attach 229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72 (image=quay.io/ceph/ceph:v18, name=ecstatic_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:19:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 29 06:19:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Nov 29 06:19:05 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Nov 29 06:19:05 compute-0 ceph-mon[74654]: pgmap v78: 37 pgs: 1 unknown, 1 creating+peering, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:05 compute-0 ceph-mon[74654]: 2.3 scrub starts
Nov 29 06:19:05 compute-0 ceph-mon[74654]: 2.3 scrub ok
Nov 29 06:19:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/956031255' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 06:19:05 compute-0 ceph-mon[74654]: osdmap e21: 2 total, 2 up, 2 in
Nov 29 06:19:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 29 06:19:05 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2774593808' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 06:19:05 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v81: 38 pgs: 1 unknown, 37 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:06 compute-0 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 06:19:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 29 06:19:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2774593808' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 06:19:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Nov 29 06:19:06 compute-0 ecstatic_gauss[88172]: enabled application 'rbd' on pool 'vms'
Nov 29 06:19:06 compute-0 ceph-mon[74654]: 2.4 scrub starts
Nov 29 06:19:06 compute-0 ceph-mon[74654]: 2.4 scrub ok
Nov 29 06:19:06 compute-0 ceph-mon[74654]: osdmap e22: 2 total, 2 up, 2 in
Nov 29 06:19:06 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2774593808' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 06:19:06 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Nov 29 06:19:06 compute-0 systemd[1]: libpod-229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72.scope: Deactivated successfully.
Nov 29 06:19:06 compute-0 podman[88157]: 2025-11-29 06:19:06.441580115 +0000 UTC m=+1.545216923 container died 229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72 (image=quay.io/ceph/ceph:v18, name=ecstatic_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d455d7c19516f932edba8bbc8679b59738a39f0026f5f872814f88a735cf506-merged.mount: Deactivated successfully.
Nov 29 06:19:06 compute-0 podman[88157]: 2025-11-29 06:19:06.495709744 +0000 UTC m=+1.599346562 container remove 229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72 (image=quay.io/ceph/ceph:v18, name=ecstatic_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 06:19:06 compute-0 systemd[1]: libpod-conmon-229057f816909ce5d8250abdce712e3ab6f957f857002a2879b59841430c0a72.scope: Deactivated successfully.
Nov 29 06:19:06 compute-0 sudo[88154]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:06 compute-0 sudo[88234]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwcgyozpdsuxezlhwxdvmhawwkekbdtq ; /usr/bin/python3'
Nov 29 06:19:06 compute-0 sudo[88234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:06 compute-0 python3[88236]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:06 compute-0 podman[88237]: 2025-11-29 06:19:06.954255607 +0000 UTC m=+0.067834271 container create 68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86 (image=quay.io/ceph/ceph:v18, name=zealous_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:19:06 compute-0 systemd[1]: Started libpod-conmon-68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86.scope.
Nov 29 06:19:07 compute-0 podman[88237]: 2025-11-29 06:19:06.927749033 +0000 UTC m=+0.041327787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12691d85c0ff373de9fe75d7c93f547175ca6c43b8cde3faa8eb20be74953ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e12691d85c0ff373de9fe75d7c93f547175ca6c43b8cde3faa8eb20be74953ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:07 compute-0 podman[88237]: 2025-11-29 06:19:07.043458754 +0000 UTC m=+0.157037508 container init 68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86 (image=quay.io/ceph/ceph:v18, name=zealous_hoover, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 06:19:07 compute-0 podman[88237]: 2025-11-29 06:19:07.053088688 +0000 UTC m=+0.166667382 container start 68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86 (image=quay.io/ceph/ceph:v18, name=zealous_hoover, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 29 06:19:07 compute-0 podman[88237]: 2025-11-29 06:19:07.056977219 +0000 UTC m=+0.170555923 container attach 68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86 (image=quay.io/ceph/ceph:v18, name=zealous_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:19:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:19:07 compute-0 ceph-mon[74654]: pgmap v81: 38 pgs: 1 unknown, 37 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:07 compute-0 ceph-mon[74654]: 2.5 scrub starts
Nov 29 06:19:07 compute-0 ceph-mon[74654]: 2.5 scrub ok
Nov 29 06:19:07 compute-0 ceph-mon[74654]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 06:19:07 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2774593808' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 06:19:07 compute-0 ceph-mon[74654]: osdmap e23: 2 total, 2 up, 2 in
Nov 29 06:19:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 29 06:19:07 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3785446785' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 06:19:07 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v83: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:19:07 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:19:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 29 06:19:08 compute-0 ceph-mon[74654]: 2.6 deep-scrub starts
Nov 29 06:19:08 compute-0 ceph-mon[74654]: 2.6 deep-scrub ok
Nov 29 06:19:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3785446785' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 06:19:08 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:19:08 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3785446785' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 06:19:08 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:19:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Nov 29 06:19:08 compute-0 zealous_hoover[88252]: enabled application 'rbd' on pool 'volumes'
Nov 29 06:19:08 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Nov 29 06:19:08 compute-0 systemd[1]: libpod-68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86.scope: Deactivated successfully.
Nov 29 06:19:08 compute-0 podman[88277]: 2025-11-29 06:19:08.546076923 +0000 UTC m=+0.026441303 container died 68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86 (image=quay.io/ceph/ceph:v18, name=zealous_hoover, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 06:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e12691d85c0ff373de9fe75d7c93f547175ca6c43b8cde3faa8eb20be74953ca-merged.mount: Deactivated successfully.
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.e( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.a( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.d( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.1e( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.c( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.4( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.6( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.1( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.1f( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.10( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.13( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.15( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.9( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.1b( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 24 pg[2.19( empty local-lis/les=0/0 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:19:09 compute-0 podman[88277]: 2025-11-29 06:19:09.362468694 +0000 UTC m=+0.842833044 container remove 68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86 (image=quay.io/ceph/ceph:v18, name=zealous_hoover, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 06:19:09 compute-0 systemd[1]: libpod-conmon-68a3fb6205ea7076db3c8141e87379055be64c04bc0b1fb605197207bf1d2f86.scope: Deactivated successfully.
Nov 29 06:19:09 compute-0 sudo[88234]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:09 compute-0 sudo[88315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugboorsgazayhwrsbaydvykuwxotpznc ; /usr/bin/python3'
Nov 29 06:19:09 compute-0 sudo[88315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 29 06:19:09 compute-0 ceph-mon[74654]: pgmap v83: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:09 compute-0 ceph-mon[74654]: 2.7 scrub starts
Nov 29 06:19:09 compute-0 ceph-mon[74654]: 2.7 scrub ok
Nov 29 06:19:09 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3785446785' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 06:19:09 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:19:09 compute-0 ceph-mon[74654]: osdmap e24: 2 total, 2 up, 2 in
Nov 29 06:19:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Nov 29 06:19:09 compute-0 python3[88317]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:09 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.a( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.13( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.15( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.1b( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.9( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.19( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.10( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.1f( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.6( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.4( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.c( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.1e( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.d( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.e( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 25 pg[2.1( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=17/17 les/c/f=19/19/0 sis=24) [1] r=0 lpr=24 pi=[17,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:19:09 compute-0 podman[88318]: 2025-11-29 06:19:09.854300103 +0000 UTC m=+0.046004820 container create ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca (image=quay.io/ceph/ceph:v18, name=determined_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 06:19:09 compute-0 systemd[1]: Started libpod-conmon-ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca.scope.
Nov 29 06:19:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d484c8adf4d3bce5a2abee4d2305e0f379aba40c646e96fd9a983e5ba849ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d484c8adf4d3bce5a2abee4d2305e0f379aba40c646e96fd9a983e5ba849ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:09 compute-0 podman[88318]: 2025-11-29 06:19:09.835836297 +0000 UTC m=+0.027541054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:09 compute-0 podman[88318]: 2025-11-29 06:19:09.935445221 +0000 UTC m=+0.127149958 container init ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca (image=quay.io/ceph/ceph:v18, name=determined_lichterman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:09 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v86: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:09 compute-0 podman[88318]: 2025-11-29 06:19:09.944429426 +0000 UTC m=+0.136134153 container start ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca (image=quay.io/ceph/ceph:v18, name=determined_lichterman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 06:19:09 compute-0 podman[88318]: 2025-11-29 06:19:09.948639316 +0000 UTC m=+0.140344053 container attach ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca (image=quay.io/ceph/ceph:v18, name=determined_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:19:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 29 06:19:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3924631149' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 06:19:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 29 06:19:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3924631149' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 06:19:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Nov 29 06:19:11 compute-0 determined_lichterman[88334]: enabled application 'rbd' on pool 'backups'
Nov 29 06:19:11 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Nov 29 06:19:11 compute-0 systemd[1]: libpod-ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca.scope: Deactivated successfully.
Nov 29 06:19:11 compute-0 podman[88318]: 2025-11-29 06:19:11.077612618 +0000 UTC m=+1.269317375 container died ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca (image=quay.io/ceph/ceph:v18, name=determined_lichterman, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 06:19:11 compute-0 ceph-mon[74654]: osdmap e25: 2 total, 2 up, 2 in
Nov 29 06:19:11 compute-0 ceph-mon[74654]: pgmap v86: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:11 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3924631149' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 06:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-86d484c8adf4d3bce5a2abee4d2305e0f379aba40c646e96fd9a983e5ba849ec-merged.mount: Deactivated successfully.
Nov 29 06:19:11 compute-0 podman[88318]: 2025-11-29 06:19:11.401462569 +0000 UTC m=+1.593167296 container remove ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca (image=quay.io/ceph/ceph:v18, name=determined_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 06:19:11 compute-0 systemd[1]: libpod-conmon-ae6afcb861c1ff97ce275ae7c81ca27259c0602005cf32be66b560db576066ca.scope: Deactivated successfully.
Nov 29 06:19:11 compute-0 sudo[88315]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:11 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 29 06:19:11 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 29 06:19:11 compute-0 sudo[88400]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksskuykvitenociuozuczhniiywelbly ; /usr/bin/python3'
Nov 29 06:19:11 compute-0 sudo[88400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:19:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:19:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:11 compute-0 sshd-session[88358]: Received disconnect from 31.6.212.12 port 39376:11: Bye Bye [preauth]
Nov 29 06:19:11 compute-0 sshd-session[88358]: Disconnected from authenticating user root 31.6.212.12 port 39376 [preauth]
Nov 29 06:19:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:19:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:19:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 06:19:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 06:19:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:19:11 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:19:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:19:11 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 29 06:19:11 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 29 06:19:11 compute-0 python3[88402]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:11 compute-0 podman[88403]: 2025-11-29 06:19:11.861453993 +0000 UTC m=+0.061170171 container create 6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4 (image=quay.io/ceph/ceph:v18, name=quizzical_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:19:11 compute-0 systemd[1]: Started libpod-conmon-6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4.scope.
Nov 29 06:19:11 compute-0 podman[88403]: 2025-11-29 06:19:11.833340203 +0000 UTC m=+0.033056471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fb304b2e443727673bd3f3fa99c11519ed012ef4ba39a8875b9dce44e9f42f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fb304b2e443727673bd3f3fa99c11519ed012ef4ba39a8875b9dce44e9f42f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:11 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v88: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:11 compute-0 podman[88403]: 2025-11-29 06:19:11.943726103 +0000 UTC m=+0.143442331 container init 6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4 (image=quay.io/ceph/ceph:v18, name=quizzical_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 06:19:11 compute-0 podman[88403]: 2025-11-29 06:19:11.953691866 +0000 UTC m=+0.153408054 container start 6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4 (image=quay.io/ceph/ceph:v18, name=quizzical_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 06:19:11 compute-0 podman[88403]: 2025-11-29 06:19:11.958118892 +0000 UTC m=+0.157835090 container attach 6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4 (image=quay.io/ceph/ceph:v18, name=quizzical_wozniak, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:12 compute-0 sshd-session[88361]: Invalid user tester from 104.208.108.166 port 27322
Nov 29 06:19:12 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3924631149' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 06:19:12 compute-0 ceph-mon[74654]: osdmap e26: 2 total, 2 up, 2 in
Nov 29 06:19:12 compute-0 ceph-mon[74654]: 2.8 scrub starts
Nov 29 06:19:12 compute-0 ceph-mon[74654]: 2.8 scrub ok
Nov 29 06:19:12 compute-0 ceph-mon[74654]: 2.e scrub starts
Nov 29 06:19:12 compute-0 ceph-mon[74654]: 2.e scrub ok
Nov 29 06:19:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 06:19:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:19:12 compute-0 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 06:19:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:19:12 compute-0 sshd-session[88361]: Received disconnect from 104.208.108.166 port 27322:11: Bye Bye [preauth]
Nov 29 06:19:12 compute-0 sshd-session[88361]: Disconnected from invalid user tester 104.208.108.166 port 27322 [preauth]
Nov 29 06:19:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 29 06:19:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/935132046' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 06:19:12 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:19:12 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:19:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 29 06:19:13 compute-0 ceph-mon[74654]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 06:19:13 compute-0 ceph-mon[74654]: pgmap v88: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:13 compute-0 ceph-mon[74654]: 2.b scrub starts
Nov 29 06:19:13 compute-0 ceph-mon[74654]: 2.b scrub ok
Nov 29 06:19:13 compute-0 ceph-mon[74654]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 06:19:13 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/935132046' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 06:19:13 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/935132046' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 06:19:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Nov 29 06:19:13 compute-0 quizzical_wozniak[88419]: enabled application 'rbd' on pool 'images'
Nov 29 06:19:13 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Nov 29 06:19:13 compute-0 systemd[1]: libpod-6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4.scope: Deactivated successfully.
Nov 29 06:19:13 compute-0 podman[88444]: 2025-11-29 06:19:13.423775789 +0000 UTC m=+0.043966022 container died 6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4 (image=quay.io/ceph/ceph:v18, name=quizzical_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4fb304b2e443727673bd3f3fa99c11519ed012ef4ba39a8875b9dce44e9f42f-merged.mount: Deactivated successfully.
Nov 29 06:19:13 compute-0 podman[88444]: 2025-11-29 06:19:13.486608996 +0000 UTC m=+0.106799189 container remove 6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4 (image=quay.io/ceph/ceph:v18, name=quizzical_wozniak, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:19:13 compute-0 systemd[1]: libpod-conmon-6090c0f042e3f47c4e9108b0ad3a459a59ec1d330ec88aeab783a543c01fe0f4.scope: Deactivated successfully.
Nov 29 06:19:13 compute-0 sudo[88400]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:13 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 06:19:13 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 06:19:13 compute-0 sudo[88484]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsnqrjwijijmzdpdyhyfgazrsaesuqon ; /usr/bin/python3'
Nov 29 06:19:13 compute-0 sudo[88484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:13 compute-0 python3[88486]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:13 compute-0 podman[88489]: 2025-11-29 06:19:13.929210445 +0000 UTC m=+0.058889726 container create d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0 (image=quay.io/ceph/ceph:v18, name=kind_shockley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:19:13 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v90: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:13 compute-0 systemd[1]: Started libpod-conmon-d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0.scope.
Nov 29 06:19:13 compute-0 podman[88489]: 2025-11-29 06:19:13.89879055 +0000 UTC m=+0.028469911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713c9795afd2f90e39430c67bf64c1c06d5a1c77bf79cb0a5d343234de9f543c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713c9795afd2f90e39430c67bf64c1c06d5a1c77bf79cb0a5d343234de9f543c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:14 compute-0 podman[88489]: 2025-11-29 06:19:14.141277147 +0000 UTC m=+0.270956548 container init d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0 (image=quay.io/ceph/ceph:v18, name=kind_shockley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:19:14 compute-0 podman[88489]: 2025-11-29 06:19:14.148006878 +0000 UTC m=+0.277686199 container start d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0 (image=quay.io/ceph/ceph:v18, name=kind_shockley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 06:19:14 compute-0 podman[88489]: 2025-11-29 06:19:14.154340219 +0000 UTC m=+0.284019590 container attach d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0 (image=quay.io/ceph/ceph:v18, name=kind_shockley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 29 06:19:14 compute-0 sshd-session[88459]: Received disconnect from 80.94.93.119 port 62646:11:  [preauth]
Nov 29 06:19:14 compute-0 sshd-session[88459]: Disconnected from authenticating user root 80.94.93.119 port 62646 [preauth]
Nov 29 06:19:14 compute-0 ceph-mon[74654]: Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:19:14 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/935132046' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 06:19:14 compute-0 ceph-mon[74654]: osdmap e27: 2 total, 2 up, 2 in
Nov 29 06:19:14 compute-0 sshd-session[88487]: Invalid user sammy from 79.116.35.29 port 50980
Nov 29 06:19:14 compute-0 sshd-session[88487]: Received disconnect from 79.116.35.29 port 50980:11: Bye Bye [preauth]
Nov 29 06:19:14 compute-0 sshd-session[88487]: Disconnected from invalid user sammy 79.116.35.29 port 50980 [preauth]
Nov 29 06:19:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 29 06:19:14 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1714792720' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 06:19:14 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 06:19:14 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 06:19:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 29 06:19:15 compute-0 ceph-mon[74654]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 06:19:15 compute-0 ceph-mon[74654]: pgmap v90: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:15 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1714792720' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 06:19:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1714792720' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 06:19:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Nov 29 06:19:15 compute-0 kind_shockley[88504]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 29 06:19:15 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Nov 29 06:19:15 compute-0 systemd[1]: libpod-d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0.scope: Deactivated successfully.
Nov 29 06:19:15 compute-0 podman[88489]: 2025-11-29 06:19:15.385459155 +0000 UTC m=+1.515138436 container died d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0 (image=quay.io/ceph/ceph:v18, name=kind_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 06:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-713c9795afd2f90e39430c67bf64c1c06d5a1c77bf79cb0a5d343234de9f543c-merged.mount: Deactivated successfully.
Nov 29 06:19:15 compute-0 podman[88489]: 2025-11-29 06:19:15.661282139 +0000 UTC m=+1.790961410 container remove d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0 (image=quay.io/ceph/ceph:v18, name=kind_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 06:19:15 compute-0 systemd[1]: libpod-conmon-d887a97bb96e42aa2997e4409be0243bcb755d1d4403e5393bdf8124aae922e0.scope: Deactivated successfully.
Nov 29 06:19:15 compute-0 sudo[88484]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:15 compute-0 sshd-session[88528]: Received disconnect from 138.124.186.225 port 54874:11: Bye Bye [preauth]
Nov 29 06:19:15 compute-0 sshd-session[88528]: Disconnected from authenticating user root 138.124.186.225 port 54874 [preauth]
Nov 29 06:19:15 compute-0 sudo[88568]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arjpufuyrijrmiudtscjhdniesvmtjrk ; /usr/bin/python3'
Nov 29 06:19:15 compute-0 sudo[88568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:19:15 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v92: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:19:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:15 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v93: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:19:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:15 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v94: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:15 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev 21227504-c921-488b-8a16-30b8106c28d2 (Updating mon deployment (+2 -> 3))
Nov 29 06:19:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 06:19:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 06:19:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 06:19:15 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 06:19:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:19:15 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:15 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Nov 29 06:19:15 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Nov 29 06:19:15 compute-0 python3[88570]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:16 compute-0 podman[88571]: 2025-11-29 06:19:16.06986467 +0000 UTC m=+0.063947810 container create 6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a (image=quay.io/ceph/ceph:v18, name=hopeful_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 06:19:16 compute-0 systemd[1]: Started libpod-conmon-6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a.scope.
Nov 29 06:19:16 compute-0 podman[88571]: 2025-11-29 06:19:16.03295365 +0000 UTC m=+0.027036820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2926c245397d06e159a567dfd85ebdff5fce9974ee8c5c0665ba6cbbc461d113/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2926c245397d06e159a567dfd85ebdff5fce9974ee8c5c0665ba6cbbc461d113/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:16 compute-0 podman[88571]: 2025-11-29 06:19:16.397615272 +0000 UTC m=+0.391698432 container init 6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a (image=quay.io/ceph/ceph:v18, name=hopeful_lumiere, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:19:16 compute-0 podman[88571]: 2025-11-29 06:19:16.40634438 +0000 UTC m=+0.400427520 container start 6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a (image=quay.io/ceph/ceph:v18, name=hopeful_lumiere, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:19:16 compute-0 ceph-mon[74654]: Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.client.admin.keyring
Nov 29 06:19:16 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1714792720' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 06:19:16 compute-0 ceph-mon[74654]: osdmap e28: 2 total, 2 up, 2 in
Nov 29 06:19:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 06:19:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 06:19:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:16 compute-0 podman[88571]: 2025-11-29 06:19:16.420677848 +0000 UTC m=+0.414761008 container attach 6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a (image=quay.io/ceph/ceph:v18, name=hopeful_lumiere, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:16 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 29 06:19:16 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_REFRESH_FAILED (was: failed to probe daemons or devices)
Nov 29 06:19:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 29 06:19:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2338482810' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 06:19:17 compute-0 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 06:19:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:19:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 29 06:19:17 compute-0 ceph-mon[74654]: pgmap v92: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:17 compute-0 ceph-mon[74654]: pgmap v93: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:17 compute-0 ceph-mon[74654]: pgmap v94: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:17 compute-0 ceph-mon[74654]: Deploying daemon mon.compute-2 on compute-2
Nov 29 06:19:17 compute-0 ceph-mon[74654]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 29 06:19:17 compute-0 ceph-mon[74654]: Health check cleared: CEPHADM_REFRESH_FAILED (was: failed to probe daemons or devices)
Nov 29 06:19:17 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2338482810' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 06:19:17 compute-0 ceph-mon[74654]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 06:19:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2338482810' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 06:19:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Nov 29 06:19:17 compute-0 hopeful_lumiere[88587]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 29 06:19:17 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Nov 29 06:19:17 compute-0 systemd[1]: libpod-6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a.scope: Deactivated successfully.
Nov 29 06:19:17 compute-0 podman[88571]: 2025-11-29 06:19:17.453216516 +0000 UTC m=+1.447299656 container died 6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a (image=quay.io/ceph/ceph:v18, name=hopeful_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 06:19:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2926c245397d06e159a567dfd85ebdff5fce9974ee8c5c0665ba6cbbc461d113-merged.mount: Deactivated successfully.
Nov 29 06:19:17 compute-0 podman[88571]: 2025-11-29 06:19:17.491131814 +0000 UTC m=+1.485214954 container remove 6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a (image=quay.io/ceph/ceph:v18, name=hopeful_lumiere, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 06:19:17 compute-0 sudo[88568]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:17 compute-0 systemd[1]: libpod-conmon-6c1a77898f9c318198a4ed74b43e93c69d79e9232db0f09811275b4f5816722a.scope: Deactivated successfully.
Nov 29 06:19:17 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 29 06:19:17 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 29 06:19:17 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v96: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:18 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 06:19:18 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 06:19:18 compute-0 python3[88697]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:19:18 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2338482810' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 06:19:18 compute-0 ceph-mon[74654]: osdmap e29: 2 total, 2 up, 2 in
Nov 29 06:19:18 compute-0 ceph-mon[74654]: 2.a scrub starts
Nov 29 06:19:18 compute-0 ceph-mon[74654]: 2.a scrub ok
Nov 29 06:19:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:19:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:19:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 29 06:19:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 29 06:19:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 06:19:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 06:19:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 06:19:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 06:19:18 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 06:19:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:19:18 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:18 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Nov 29 06:19:18 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Nov 29 06:19:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 29 06:19:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Nov 29 06:19:18 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1332418664; not ready for session (expect reconnect)
Nov 29 06:19:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 06:19:18 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:18 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Nov 29 06:19:18 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 06:19:18 compute-0 ceph-mon[74654]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Nov 29 06:19:18 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:19:18 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 06:19:18 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 06:19:18 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 06:19:18 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:18 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 06:19:18 compute-0 python3[88768]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397158.1797266-37397-196526841392482/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:19:19 compute-0 sudo[88868]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stiwlqrtvbfrgxtcurngcgfhfccohnqs ; /usr/bin/python3'
Nov 29 06:19:19 compute-0 sudo[88868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:19 compute-0 python3[88870]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:19:19 compute-0 sudo[88868]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:19 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1332418664; not ready for session (expect reconnect)
Nov 29 06:19:19 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 06:19:19 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:19 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 06:19:19 compute-0 sudo[88943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etegwbccfcejyghlozzjgaerfiwgajjd ; /usr/bin/python3'
Nov 29 06:19:19 compute-0 sudo[88943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:19 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v97: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:20 compute-0 python3[88945]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397159.170986-37411-170038243516047/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=b7a9aa9ffd1d96f069d7e387f055c8a3b711590d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:19:20 compute-0 sudo[88943]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:20 compute-0 sudo[88993]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asyjrabfngcjexwiyalrehdsksubrfvo ; /usr/bin/python3'
Nov 29 06:19:20 compute-0 sudo[88993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:20 compute-0 python3[88995]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:20 compute-0 podman[88996]: 2025-11-29 06:19:20.530697307 +0000 UTC m=+0.078949356 container create a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d (image=quay.io/ceph/ceph:v18, name=romantic_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 29 06:19:20 compute-0 systemd[1]: Started libpod-conmon-a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d.scope.
Nov 29 06:19:20 compute-0 podman[88996]: 2025-11-29 06:19:20.496504625 +0000 UTC m=+0.044756734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a40472dd652af73a2cd61771997ea7ac531ac396e584ffca3a5b0c20f016f6b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a40472dd652af73a2cd61771997ea7ac531ac396e584ffca3a5b0c20f016f6b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a40472dd652af73a2cd61771997ea7ac531ac396e584ffca3a5b0c20f016f6b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:20 compute-0 podman[88996]: 2025-11-29 06:19:20.627347616 +0000 UTC m=+0.175599725 container init a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d (image=quay.io/ceph/ceph:v18, name=romantic_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:19:20 compute-0 podman[88996]: 2025-11-29 06:19:20.63803416 +0000 UTC m=+0.186286169 container start a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d (image=quay.io/ceph/ceph:v18, name=romantic_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 29 06:19:20 compute-0 podman[88996]: 2025-11-29 06:19:20.641596352 +0000 UTC m=+0.189848391 container attach a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d (image=quay.io/ceph/ceph:v18, name=romantic_almeida, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Nov 29 06:19:20 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1332418664; not ready for session (expect reconnect)
Nov 29 06:19:20 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 06:19:20 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:20 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 06:19:20 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 29 06:19:21 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 29 06:19:21 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 29 06:19:21 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1332418664; not ready for session (expect reconnect)
Nov 29 06:19:21 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 06:19:21 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:21 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 06:19:21 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:19:21 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 06:19:21 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 06:19:21 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v98: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:22 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 06:19:22 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 06:19:22 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 06:19:22 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:22 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 06:19:22 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 29 06:19:22 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 29 06:19:22 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 29 06:19:22 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1332418664; not ready for session (expect reconnect)
Nov 29 06:19:22 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 06:19:22 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:22 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 06:19:23 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 06:19:23 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 06:19:23 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:23 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 06:19:23 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1332418664; not ready for session (expect reconnect)
Nov 29 06:19:23 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 06:19:23 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:23 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 06:19:23 compute-0 ceph-mon[74654]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Nov 29 06:19:23 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:19:23 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 06:19:23 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 06:19:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:19:23 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 06:19:23 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Nov 29 06:19:23 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.vxabpq(active, since 2m)
Nov 29 06:19:23 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 06:19:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:19:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:23 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v99: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 06:19:24 compute-0 ceph-mon[74654]: 2.f deep-scrub starts
Nov 29 06:19:24 compute-0 ceph-mon[74654]: 2.f deep-scrub ok
Nov 29 06:19:24 compute-0 ceph-mon[74654]: Deploying daemon mon.compute-1 on compute-1
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0 calling monitor election
Nov 29 06:19:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mon[74654]: pgmap v97: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-2 calling monitor election
Nov 29 06:19:24 compute-0 ceph-mon[74654]: 2.11 scrub starts
Nov 29 06:19:24 compute-0 ceph-mon[74654]: 2.11 scrub ok
Nov 29 06:19:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mon[74654]: pgmap v98: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mon[74654]: 2.12 scrub starts
Nov 29 06:19:24 compute-0 ceph-mon[74654]: 2.12 scrub ok
Nov 29 06:19:24 compute-0 ceph-mon[74654]: 2.c scrub starts
Nov 29 06:19:24 compute-0 ceph-mon[74654]: 2.c scrub ok
Nov 29 06:19:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 06:19:24 compute-0 ceph-mon[74654]: monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 06:19:24 compute-0 ceph-mon[74654]: fsmap 
Nov 29 06:19:24 compute-0 ceph-mon[74654]: osdmap e29: 2 total, 2 up, 2 in
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mgrmap e8: compute-0.vxabpq(active, since 2m)
Nov 29 06:19:24 compute-0 ceph-mon[74654]: overall HEALTH_OK
Nov 29 06:19:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev 21227504-c921-488b-8a16-30b8106c28d2 (Updating mon deployment (+2 -> 3))
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event 21227504-c921-488b-8a16-30b8106c28d2 (Updating mon deployment (+2 -> 3)) in 8 seconds
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 06:19:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev 878a4358-7d35-4bea-97ea-6a2ffa9735e2 (Updating mgr deployment (+2 -> 3))
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.ngsyhe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 06:19:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ngsyhe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 06:19:24 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 06:19:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ngsyhe", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 06:19:24 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:19:24 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.ngsyhe on compute-2
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.ngsyhe on compute-2
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 06:19:24 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 06:19:24 compute-0 ceph-mon[74654]: paxos.0).electionLogic(10) init, last seen epoch 10
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 06:19:24 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 06:19:24 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 06:19:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/501439537' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: mgr.server handle_report got status from non-daemon mon.compute-2
Nov 29 06:19:24 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:19:24.763+0000 7f90f1cf5640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Nov 29 06:19:24 compute-0 ceph-mgr[74948]: [progress INFO root] Writing back 4 completed events
Nov 29 06:19:24 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 06:19:25 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 06:19:25 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 06:19:25 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:25 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 06:19:25 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v100: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:26 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 06:19:26 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 06:19:26 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:26 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 06:19:26 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 06:19:26 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:19:26 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 06:19:26 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 29 06:19:26 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 29 06:19:26 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 06:19:27 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 06:19:27 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 06:19:27 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:27 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 06:19:27 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 06:19:27 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v101: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:28 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 06:19:28 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 06:19:28 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:28 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 06:19:29 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 06:19:29 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:29 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 06:19:29 compute-0 ceph-mon[74654]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Nov 29 06:19:29 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:19:29 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 06:19:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.vxabpq(active, since 2m)
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/501439537' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 06:19:29 compute-0 romantic_almeida[89012]: 
Nov 29 06:19:29 compute-0 romantic_almeida[89012]: [global]
Nov 29 06:19:29 compute-0 romantic_almeida[89012]:         fsid = 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:19:29 compute-0 romantic_almeida[89012]:         mon_host = 192.168.122.100
Nov 29 06:19:29 compute-0 systemd[1]: libpod-a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d.scope: Deactivated successfully.
Nov 29 06:19:29 compute-0 podman[88996]: 2025-11-29 06:19:29.403711649 +0000 UTC m=+8.951963658 container died a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d (image=quay.io/ceph/ceph:v18, name=romantic_almeida, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:29 compute-0 ceph-mon[74654]: pgmap v99: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:29 compute-0 ceph-mon[74654]: Deploying daemon mgr.compute-2.ngsyhe on compute-2
Nov 29 06:19:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 06:19:29 compute-0 ceph-mon[74654]: mon.compute-0 calling monitor election
Nov 29 06:19:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 06:19:29 compute-0 ceph-mon[74654]: mon.compute-2 calling monitor election
Nov 29 06:19:29 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/501439537' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 06:19:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:29 compute-0 ceph-mon[74654]: 2.14 scrub starts
Nov 29 06:19:29 compute-0 ceph-mon[74654]: 2.14 scrub ok
Nov 29 06:19:29 compute-0 ceph-mon[74654]: pgmap v100: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:29 compute-0 ceph-mon[74654]: mon.compute-1 calling monitor election
Nov 29 06:19:29 compute-0 ceph-mon[74654]: 2.d scrub starts
Nov 29 06:19:29 compute-0 ceph-mon[74654]: 2.d scrub ok
Nov 29 06:19:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:29 compute-0 ceph-mon[74654]: 2.16 scrub starts
Nov 29 06:19:29 compute-0 ceph-mon[74654]: 2.16 scrub ok
Nov 29 06:19:29 compute-0 ceph-mon[74654]: pgmap v101: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:29 compute-0 ceph-mon[74654]: 2.17 scrub starts
Nov 29 06:19:29 compute-0 ceph-mon[74654]: 2.17 scrub ok
Nov 29 06:19:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:29 compute-0 ceph-mon[74654]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 06:19:29 compute-0 ceph-mon[74654]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 06:19:29 compute-0 ceph-mon[74654]: fsmap 
Nov 29 06:19:29 compute-0 ceph-mon[74654]: osdmap e29: 2 total, 2 up, 2 in
Nov 29 06:19:29 compute-0 ceph-mon[74654]: mgrmap e8: compute-0.vxabpq(active, since 2m)
Nov 29 06:19:29 compute-0 ceph-mon[74654]: overall HEALTH_OK
Nov 29 06:19:29 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/501439537' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:19:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a40472dd652af73a2cd61771997ea7ac531ac396e584ffca3a5b0c20f016f6b-merged.mount: Deactivated successfully.
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.gaxpay", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.gaxpay", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.gaxpay", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 06:19:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:19:29 compute-0 podman[88996]: 2025-11-29 06:19:29.574017502 +0000 UTC m=+9.122269521 container remove a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d (image=quay.io/ceph/ceph:v18, name=romantic_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:19:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:19:29 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:29 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.gaxpay on compute-1
Nov 29 06:19:29 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.gaxpay on compute-1
Nov 29 06:19:29 compute-0 systemd[1]: libpod-conmon-a461efb4ddd88a08256951861d468e7753cd62e39745b1e72105862d5c16358d.scope: Deactivated successfully.
Nov 29 06:19:29 compute-0 sudo[88993]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:29 compute-0 sudo[89074]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaijonviwbbhwvsabnhrhknnpxviwhxw ; /usr/bin/python3'
Nov 29 06:19:29 compute-0 sudo[89074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:29 compute-0 python3[89076]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:29 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v102: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:30 compute-0 podman[89077]: 2025-11-29 06:19:30.025144072 +0000 UTC m=+0.076453376 container create cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59 (image=quay.io/ceph/ceph:v18, name=upbeat_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 06:19:30 compute-0 systemd[1]: Started libpod-conmon-cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59.scope.
Nov 29 06:19:30 compute-0 podman[89077]: 2025-11-29 06:19:29.995140913 +0000 UTC m=+0.046450227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:30 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2968721431; not ready for session (expect reconnect)
Nov 29 06:19:30 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 06:19:30 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e994cc574a0853b4352b9b649ad9dcfa236416a3318a01e3eb39fa5c20cf16d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e994cc574a0853b4352b9b649ad9dcfa236416a3318a01e3eb39fa5c20cf16d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e994cc574a0853b4352b9b649ad9dcfa236416a3318a01e3eb39fa5c20cf16d7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:30 compute-0 podman[89077]: 2025-11-29 06:19:30.141365013 +0000 UTC m=+0.192674367 container init cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59 (image=quay.io/ceph/ceph:v18, name=upbeat_solomon, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:30 compute-0 podman[89077]: 2025-11-29 06:19:30.154645557 +0000 UTC m=+0.205954861 container start cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59 (image=quay.io/ceph/ceph:v18, name=upbeat_solomon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:19:30 compute-0 podman[89077]: 2025-11-29 06:19:30.159728097 +0000 UTC m=+0.211037371 container attach cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59 (image=quay.io/ceph/ceph:v18, name=upbeat_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:19:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.gaxpay", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 06:19:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.gaxpay", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 06:19:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:19:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:30 compute-0 ceph-mon[74654]: Deploying daemon mgr.compute-1.gaxpay on compute-1
Nov 29 06:19:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 06:19:30 compute-0 ceph-mon[74654]: 2.18 scrub starts
Nov 29 06:19:30 compute-0 ceph-mon[74654]: 2.18 scrub ok
Nov 29 06:19:30 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 29 06:19:30 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2714267067' entity='client.admin' 
Nov 29 06:19:30 compute-0 upbeat_solomon[89093]: set ssl_option
Nov 29 06:19:30 compute-0 systemd[1]: libpod-cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59.scope: Deactivated successfully.
Nov 29 06:19:30 compute-0 podman[89077]: 2025-11-29 06:19:30.876241007 +0000 UTC m=+0.927550311 container died cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59 (image=quay.io/ceph/ceph:v18, name=upbeat_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-e994cc574a0853b4352b9b649ad9dcfa236416a3318a01e3eb39fa5c20cf16d7-merged.mount: Deactivated successfully.
Nov 29 06:19:30 compute-0 podman[89077]: 2025-11-29 06:19:30.939581722 +0000 UTC m=+0.990890996 container remove cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59 (image=quay.io/ceph/ceph:v18, name=upbeat_solomon, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:19:30 compute-0 systemd[1]: libpod-conmon-cf0b4f9501a2f6fdaeeead08e39f617d3b1d092a80ad607e8ee3d43fc5c56f59.scope: Deactivated successfully.
Nov 29 06:19:30 compute-0 sudo[89074]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:31 compute-0 ceph-mgr[74948]: mgr.server handle_report got status from non-daemon mon.compute-1
Nov 29 06:19:31 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:19:31.104+0000 7f90f1cf5640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Nov 29 06:19:31 compute-0 sudo[89155]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzyfzitmnqjtkyihqxkfrvpydgjruyfg ; /usr/bin/python3'
Nov 29 06:19:31 compute-0 sudo[89155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:31 compute-0 python3[89157]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:31 compute-0 podman[89158]: 2025-11-29 06:19:31.391777574 +0000 UTC m=+0.051468935 container create 47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387 (image=quay.io/ceph/ceph:v18, name=nifty_nobel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 06:19:31 compute-0 systemd[1]: Started libpod-conmon-47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387.scope.
Nov 29 06:19:31 compute-0 podman[89158]: 2025-11-29 06:19:31.367450754 +0000 UTC m=+0.027142115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5080b3df61e3e41960b15e862e8599ed1ecf13fe6e6b480b4ca4bc38e1bb7971/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5080b3df61e3e41960b15e862e8599ed1ecf13fe6e6b480b4ca4bc38e1bb7971/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5080b3df61e3e41960b15e862e8599ed1ecf13fe6e6b480b4ca4bc38e1bb7971/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:31 compute-0 podman[89158]: 2025-11-29 06:19:31.492982931 +0000 UTC m=+0.152674282 container init 47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387 (image=quay.io/ceph/ceph:v18, name=nifty_nobel, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 06:19:31 compute-0 podman[89158]: 2025-11-29 06:19:31.50137497 +0000 UTC m=+0.161066321 container start 47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387 (image=quay.io/ceph/ceph:v18, name=nifty_nobel, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 06:19:31 compute-0 podman[89158]: 2025-11-29 06:19:31.505251435 +0000 UTC m=+0.164942796 container attach 47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387 (image=quay.io/ceph/ceph:v18, name=nifty_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 29 06:19:31 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 29 06:19:31 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 29 06:19:31 compute-0 ceph-mon[74654]: pgmap v102: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:31 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2714267067' entity='client.admin' 
Nov 29 06:19:31 compute-0 ceph-mon[74654]: 2.1a deep-scrub starts
Nov 29 06:19:31 compute-0 ceph-mon[74654]: 2.1a deep-scrub ok
Nov 29 06:19:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:19:31 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:19:31 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 06:19:31 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v103: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:32 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:19:32 compute-0 ceph-mgr[74948]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 06:19:32 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 06:19:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 06:19:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:19:32 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:32 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev 878a4358-7d35-4bea-97ea-6a2ffa9735e2 (Updating mgr deployment (+2 -> 3))
Nov 29 06:19:32 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event 878a4358-7d35-4bea-97ea-6a2ffa9735e2 (Updating mgr deployment (+2 -> 3)) in 8 seconds
Nov 29 06:19:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 06:19:32 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:32 compute-0 ceph-mgr[74948]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Nov 29 06:19:32 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Nov 29 06:19:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 06:19:32 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:32 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev 8784e530-6512-4060-945e-12e8ac08b061 (Updating crash deployment (+1 -> 3))
Nov 29 06:19:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 06:19:32 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 06:19:32 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:32 compute-0 nifty_nobel[89172]: Scheduled rgw.rgw update...
Nov 29 06:19:32 compute-0 nifty_nobel[89172]: Scheduled ingress.rgw.default update...
Nov 29 06:19:32 compute-0 systemd[1]: libpod-47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387.scope: Deactivated successfully.
Nov 29 06:19:32 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 06:19:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:19:32 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:32 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Nov 29 06:19:32 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Nov 29 06:19:32 compute-0 podman[89199]: 2025-11-29 06:19:32.493076299 +0000 UTC m=+0.028440133 container died 47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387 (image=quay.io/ceph/ceph:v18, name=nifty_nobel, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:19:32 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 29 06:19:32 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 29 06:19:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5080b3df61e3e41960b15e862e8599ed1ecf13fe6e6b480b4ca4bc38e1bb7971-merged.mount: Deactivated successfully.
Nov 29 06:19:32 compute-0 podman[89199]: 2025-11-29 06:19:32.575079197 +0000 UTC m=+0.110443011 container remove 47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387 (image=quay.io/ceph/ceph:v18, name=nifty_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:19:32 compute-0 systemd[1]: libpod-conmon-47d05e73fbf94287d0a8caac7c0649921c2de6c772e7cb9800fff31c2d4d7387.scope: Deactivated successfully.
Nov 29 06:19:32 compute-0 sudo[89155]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:32 compute-0 ceph-mon[74654]: 2.1e scrub starts
Nov 29 06:19:32 compute-0 ceph-mon[74654]: 2.1e scrub ok
Nov 29 06:19:32 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:32 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:32 compute-0 ceph-mon[74654]: pgmap v103: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:32 compute-0 ceph-mon[74654]: from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:19:32 compute-0 ceph-mon[74654]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 06:19:32 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:32 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:32 compute-0 ceph-mon[74654]: Saving service ingress.rgw.default spec with placement count:2
Nov 29 06:19:32 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:32 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 06:19:32 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:32 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 06:19:32 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:32 compute-0 ceph-mon[74654]: Deploying daemon crash.compute-2 on compute-2
Nov 29 06:19:33 compute-0 sshd-session[89196]: Invalid user user from 103.147.159.91 port 52472
Nov 29 06:19:33 compute-0 ceph-mon[74654]: 2.1f scrub starts
Nov 29 06:19:33 compute-0 ceph-mon[74654]: 2.1f scrub ok
Nov 29 06:19:33 compute-0 sshd-session[89196]: Received disconnect from 103.147.159.91 port 52472:11: Bye Bye [preauth]
Nov 29 06:19:33 compute-0 sshd-session[89196]: Disconnected from invalid user user 103.147.159.91 port 52472 [preauth]
Nov 29 06:19:33 compute-0 python3[89289]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:19:33 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v104: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:34 compute-0 python3[89360]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397173.487423-37452-84494872252177/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:19:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:19:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:19:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 06:19:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:34 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev 8784e530-6512-4060-945e-12e8ac08b061 (Updating crash deployment (+1 -> 3))
Nov 29 06:19:34 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event 8784e530-6512-4060-945e-12e8ac08b061 (Updating crash deployment (+1 -> 3)) in 2 seconds
Nov 29 06:19:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 06:19:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:19:34 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:19:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:19:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:19:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:19:34 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:19:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:19:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:19:34 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:19:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:19:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:19:34 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:34 compute-0 sudo[89374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:19:34 compute-0 sudo[89374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:34 compute-0 sudo[89374]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:34 compute-0 ceph-mgr[74948]: [progress INFO root] Writing back 6 completed events
Nov 29 06:19:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 06:19:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:34 compute-0 sudo[89410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:19:34 compute-0 sudo[89410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:34 compute-0 sudo[89410]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:34 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 29 06:19:34 compute-0 sudo[89435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:19:34 compute-0 sudo[89435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:34 compute-0 sudo[89435]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:34 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 29 06:19:34 compute-0 sudo[89504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwcyerukgfbszvsakrnefztpqeewgqoi ; /usr/bin/python3'
Nov 29 06:19:34 compute-0 sudo[89504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:34 compute-0 sudo[89464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:19:34 compute-0 sudo[89464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:34 compute-0 python3[89509]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:34 compute-0 podman[89518]: 2025-11-29 06:19:34.896424742 +0000 UTC m=+0.050821376 container create 31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80 (image=quay.io/ceph/ceph:v18, name=reverent_torvalds, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:19:34 compute-0 systemd[1]: Started libpod-conmon-31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80.scope.
Nov 29 06:19:34 compute-0 podman[89518]: 2025-11-29 06:19:34.875867774 +0000 UTC m=+0.030264528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56bcda562af46f26337908d6e0886a0552aa14d2b9baf091859a0873b55e4018/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56bcda562af46f26337908d6e0886a0552aa14d2b9baf091859a0873b55e4018/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56bcda562af46f26337908d6e0886a0552aa14d2b9baf091859a0873b55e4018/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:34 compute-0 podman[89518]: 2025-11-29 06:19:34.99360632 +0000 UTC m=+0.148002974 container init 31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80 (image=quay.io/ceph/ceph:v18, name=reverent_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 06:19:35 compute-0 podman[89518]: 2025-11-29 06:19:35.001553606 +0000 UTC m=+0.155950270 container start 31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80 (image=quay.io/ceph/ceph:v18, name=reverent_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:35 compute-0 podman[89518]: 2025-11-29 06:19:35.00541707 +0000 UTC m=+0.159813694 container attach 31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80 (image=quay.io/ceph/ceph:v18, name=reverent_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:35 compute-0 podman[89570]: 2025-11-29 06:19:35.158556035 +0000 UTC m=+0.064334476 container create 2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_curran, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 06:19:35 compute-0 systemd[1]: Started libpod-conmon-2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad.scope.
Nov 29 06:19:35 compute-0 podman[89570]: 2025-11-29 06:19:35.12223309 +0000 UTC m=+0.028011611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:19:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:35 compute-0 ceph-mon[74654]: pgmap v104: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:19:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:19:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:19:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:19:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:35 compute-0 ceph-mon[74654]: 2.10 scrub starts
Nov 29 06:19:35 compute-0 ceph-mon[74654]: 2.10 scrub ok
Nov 29 06:19:35 compute-0 podman[89570]: 2025-11-29 06:19:35.259581387 +0000 UTC m=+0.165359918 container init 2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_curran, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:19:35 compute-0 podman[89570]: 2025-11-29 06:19:35.271278974 +0000 UTC m=+0.177057445 container start 2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_curran, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:19:35 compute-0 podman[89570]: 2025-11-29 06:19:35.276280822 +0000 UTC m=+0.182059303 container attach 2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_curran, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 29 06:19:35 compute-0 nice_curran[89587]: 167 167
Nov 29 06:19:35 compute-0 systemd[1]: libpod-2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad.scope: Deactivated successfully.
Nov 29 06:19:35 compute-0 podman[89570]: 2025-11-29 06:19:35.278070305 +0000 UTC m=+0.183848786 container died 2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 29 06:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fb694ba32fff9729ab004e5b26fc1098cc241cdb47b96d9bfc4fedcb78256ad-merged.mount: Deactivated successfully.
Nov 29 06:19:35 compute-0 podman[89570]: 2025-11-29 06:19:35.336134454 +0000 UTC m=+0.241912895 container remove 2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 06:19:35 compute-0 systemd[1]: libpod-conmon-2971cffbed226a33cfc7b0f1461f01cd4b1fe258ad5004cdbc9e152e2c5784ad.scope: Deactivated successfully.
Nov 29 06:19:35 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:19:35 compute-0 ceph-mgr[74948]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 06:19:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 29 06:19:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 06:19:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 29 06:19:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 06:19:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 29 06:19:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 06:19:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 29 06:19:35 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74650]: 2025-11-29T06:19:35.588+0000 7fe455879640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 06:19:35 compute-0 ceph-mon[74654]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 06:19:35 compute-0 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 06:19:35 compute-0 podman[89628]: 2025-11-29 06:19:35.552042118 +0000 UTC m=+0.038069758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:19:35 compute-0 podman[89628]: 2025-11-29 06:19:35.667292051 +0000 UTC m=+0.153319621 container create ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_blackburn, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:19:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 06:19:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e2 new map
Nov 29 06:19:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T06:19:35.588785+0000
                                           modified        2025-11-29T06:19:35.589013+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Nov 29 06:19:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Nov 29 06:19:35 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Nov 29 06:19:35 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 29 06:19:35 compute-0 ceph-mgr[74948]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 06:19:35 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 06:19:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 06:19:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:35 compute-0 ceph-mgr[74948]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 06:19:35 compute-0 podman[89518]: 2025-11-29 06:19:35.715680404 +0000 UTC m=+0.870077078 container died 31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80 (image=quay.io/ceph/ceph:v18, name=reverent_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 06:19:35 compute-0 systemd[1]: Started libpod-conmon-ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e.scope.
Nov 29 06:19:35 compute-0 systemd[1]: libpod-31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80.scope: Deactivated successfully.
Nov 29 06:19:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-56bcda562af46f26337908d6e0886a0552aa14d2b9baf091859a0873b55e4018-merged.mount: Deactivated successfully.
Nov 29 06:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d61b98b5d187db383c801b9b72b96ea41e1d428732325a121362365fa1a890/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d61b98b5d187db383c801b9b72b96ea41e1d428732325a121362365fa1a890/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d61b98b5d187db383c801b9b72b96ea41e1d428732325a121362365fa1a890/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d61b98b5d187db383c801b9b72b96ea41e1d428732325a121362365fa1a890/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d61b98b5d187db383c801b9b72b96ea41e1d428732325a121362365fa1a890/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:35 compute-0 podman[89518]: 2025-11-29 06:19:35.827942079 +0000 UTC m=+0.982338723 container remove 31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80 (image=quay.io/ceph/ceph:v18, name=reverent_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:35 compute-0 podman[89628]: 2025-11-29 06:19:35.834942456 +0000 UTC m=+0.320970016 container init ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 06:19:35 compute-0 podman[89628]: 2025-11-29 06:19:35.845374055 +0000 UTC m=+0.331401645 container start ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_blackburn, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:19:35 compute-0 systemd[1]: libpod-conmon-31620dddb9f46df4e47574567a6db28ed6a6f620272d46ccbf253a4b8b5dcd80.scope: Deactivated successfully.
Nov 29 06:19:35 compute-0 podman[89628]: 2025-11-29 06:19:35.850679272 +0000 UTC m=+0.336706822 container attach ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_blackburn, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 06:19:35 compute-0 sudo[89504]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:35 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v106: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:36 compute-0 sudo[89688]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kteisafaellvtdiuskldzkjyomquoilo ; /usr/bin/python3'
Nov 29 06:19:36 compute-0 sudo[89688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:36 compute-0 python3[89690]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:36 compute-0 ceph-mon[74654]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:19:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 06:19:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 06:19:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 06:19:36 compute-0 ceph-mon[74654]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 06:19:36 compute-0 ceph-mon[74654]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 06:19:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 06:19:36 compute-0 ceph-mon[74654]: osdmap e30: 2 total, 2 up, 2 in
Nov 29 06:19:36 compute-0 ceph-mon[74654]: fsmap cephfs:0
Nov 29 06:19:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:36 compute-0 podman[89691]: 2025-11-29 06:19:36.350077672 +0000 UTC m=+0.073347833 container create 4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c (image=quay.io/ceph/ceph:v18, name=objective_hopper, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:19:36 compute-0 systemd[1]: Started libpod-conmon-4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c.scope.
Nov 29 06:19:36 compute-0 podman[89691]: 2025-11-29 06:19:36.310572752 +0000 UTC m=+0.033842953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8642ed8e0ba8cd2cb266cc29d22e1a9b1a0a44d17bf833660e03b94e86b34201/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8642ed8e0ba8cd2cb266cc29d22e1a9b1a0a44d17bf833660e03b94e86b34201/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8642ed8e0ba8cd2cb266cc29d22e1a9b1a0a44d17bf833660e03b94e86b34201/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:36 compute-0 podman[89691]: 2025-11-29 06:19:36.466201121 +0000 UTC m=+0.189471282 container init 4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c (image=quay.io/ceph/ceph:v18, name=objective_hopper, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:19:36 compute-0 podman[89691]: 2025-11-29 06:19:36.478438673 +0000 UTC m=+0.201708814 container start 4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c (image=quay.io/ceph/ceph:v18, name=objective_hopper, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 06:19:36 compute-0 podman[89691]: 2025-11-29 06:19:36.48237735 +0000 UTC m=+0.205647491 container attach 4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c (image=quay.io/ceph/ceph:v18, name=objective_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 06:19:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "f86a06f9-a09f-46de-8440-929a842d2c66"} v 0) v1
Nov 29 06:19:36 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f86a06f9-a09f-46de-8440-929a842d2c66"}]: dispatch
Nov 29 06:19:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 29 06:19:36 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f86a06f9-a09f-46de-8440-929a842d2c66"}]': finished
Nov 29 06:19:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Nov 29 06:19:36 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Nov 29 06:19:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:19:36 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:36 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:19:36 compute-0 sharp_blackburn[89653]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:19:36 compute-0 sharp_blackburn[89653]: --> relative data size: 1.0
Nov 29 06:19:36 compute-0 sharp_blackburn[89653]: --> All data devices are unavailable
Nov 29 06:19:36 compute-0 systemd[1]: libpod-ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e.scope: Deactivated successfully.
Nov 29 06:19:36 compute-0 conmon[89653]: conmon ad247c9dcb3742a3aa50 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e.scope/container/memory.events
Nov 29 06:19:36 compute-0 podman[89628]: 2025-11-29 06:19:36.778540591 +0000 UTC m=+1.264568141 container died ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_blackburn, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 06:19:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-41d61b98b5d187db383c801b9b72b96ea41e1d428732325a121362365fa1a890-merged.mount: Deactivated successfully.
Nov 29 06:19:36 compute-0 podman[89628]: 2025-11-29 06:19:36.830269983 +0000 UTC m=+1.316297523 container remove ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:19:36 compute-0 systemd[1]: libpod-conmon-ad247c9dcb3742a3aa50756a2847030a9d6da9cc0123086b33fc7f6010d4744e.scope: Deactivated successfully.
Nov 29 06:19:36 compute-0 sudo[89464]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:36 compute-0 sudo[89752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:19:36 compute-0 sudo[89752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:36 compute-0 sudo[89752]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:36 compute-0 sudo[89777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:19:36 compute-0 sudo[89777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:36 compute-0 sudo[89777]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:37 compute-0 sudo[89802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:19:37 compute-0 sudo[89802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:37 compute-0 sudo[89802]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:37 compute-0 sudo[89827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:19:37 compute-0 sudo[89827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:37 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14268 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:19:37 compute-0 ceph-mgr[74948]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 06:19:37 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 06:19:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 06:19:37 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:37 compute-0 objective_hopper[89706]: Scheduled mds.cephfs update...
Nov 29 06:19:37 compute-0 systemd[1]: libpod-4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c.scope: Deactivated successfully.
Nov 29 06:19:37 compute-0 podman[89691]: 2025-11-29 06:19:37.135638635 +0000 UTC m=+0.858908786 container died 4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c (image=quay.io/ceph/ceph:v18, name=objective_hopper, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:19:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8642ed8e0ba8cd2cb266cc29d22e1a9b1a0a44d17bf833660e03b94e86b34201-merged.mount: Deactivated successfully.
Nov 29 06:19:37 compute-0 podman[89691]: 2025-11-29 06:19:37.185588414 +0000 UTC m=+0.908858535 container remove 4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c (image=quay.io/ceph/ceph:v18, name=objective_hopper, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 06:19:37 compute-0 systemd[1]: libpod-conmon-4b804124fa9347391453ea0123ded4916cb8ff3b1010e6a5c310e014ae5d125c.scope: Deactivated successfully.
Nov 29 06:19:37 compute-0 sudo[89688]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:37 compute-0 ceph-mon[74654]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 06:19:37 compute-0 ceph-mon[74654]: pgmap v106: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:37 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2624547066' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f86a06f9-a09f-46de-8440-929a842d2c66"}]: dispatch
Nov 29 06:19:37 compute-0 ceph-mon[74654]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f86a06f9-a09f-46de-8440-929a842d2c66"}]: dispatch
Nov 29 06:19:37 compute-0 ceph-mon[74654]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f86a06f9-a09f-46de-8440-929a842d2c66"}]': finished
Nov 29 06:19:37 compute-0 ceph-mon[74654]: osdmap e31: 3 total, 2 up, 3 in
Nov 29 06:19:37 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:37 compute-0 ceph-mon[74654]: 2.1c scrub starts
Nov 29 06:19:37 compute-0 ceph-mon[74654]: 2.1c scrub ok
Nov 29 06:19:37 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:37 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2894938433' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 06:19:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:19:37 compute-0 podman[89906]: 2025-11-29 06:19:37.423096748 +0000 UTC m=+0.054039981 container create 7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 06:19:37 compute-0 systemd[1]: Started libpod-conmon-7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e.scope.
Nov 29 06:19:37 compute-0 podman[89906]: 2025-11-29 06:19:37.397027846 +0000 UTC m=+0.027971109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:19:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:37 compute-0 podman[89906]: 2025-11-29 06:19:37.509724984 +0000 UTC m=+0.140668197 container init 7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 06:19:37 compute-0 podman[89906]: 2025-11-29 06:19:37.521452841 +0000 UTC m=+0.152396044 container start 7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 06:19:37 compute-0 interesting_aryabhata[89922]: 167 167
Nov 29 06:19:37 compute-0 systemd[1]: libpod-7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e.scope: Deactivated successfully.
Nov 29 06:19:37 compute-0 podman[89906]: 2025-11-29 06:19:37.524603244 +0000 UTC m=+0.155546447 container attach 7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 06:19:37 compute-0 podman[89906]: 2025-11-29 06:19:37.526207582 +0000 UTC m=+0.157150825 container died 7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_aryabhata, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:19:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc479b3bf604959897f332ae3979339e885dcbc5f4068eeb0a75ccf766932956-merged.mount: Deactivated successfully.
Nov 29 06:19:37 compute-0 podman[89906]: 2025-11-29 06:19:37.562386053 +0000 UTC m=+0.193329266 container remove 7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_aryabhata, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 06:19:37 compute-0 systemd[1]: libpod-conmon-7d713f0ff7a157cd105ebacaf73b8ce28d087bc93e93d187b32b51ed4270b15e.scope: Deactivated successfully.
Nov 29 06:19:37 compute-0 podman[89987]: 2025-11-29 06:19:37.782931704 +0000 UTC m=+0.065584433 container create 0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_robinson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 06:19:37 compute-0 systemd[1]: Started libpod-conmon-0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3.scope.
Nov 29 06:19:37 compute-0 sudo[90037]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfuvidgyllsvpieqkmtivvpsumqgpnvp ; /usr/bin/python3'
Nov 29 06:19:37 compute-0 sudo[90037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:37 compute-0 podman[89987]: 2025-11-29 06:19:37.760593233 +0000 UTC m=+0.043245982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:19:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5e173f2e3db63b46924b7bb1e38407b2ce7be589638601bf3b70d8b6587dcb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5e173f2e3db63b46924b7bb1e38407b2ce7be589638601bf3b70d8b6587dcb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5e173f2e3db63b46924b7bb1e38407b2ce7be589638601bf3b70d8b6587dcb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5e173f2e3db63b46924b7bb1e38407b2ce7be589638601bf3b70d8b6587dcb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:37 compute-0 podman[89987]: 2025-11-29 06:19:37.894808418 +0000 UTC m=+0.177461127 container init 0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 06:19:37 compute-0 podman[89987]: 2025-11-29 06:19:37.904610858 +0000 UTC m=+0.187263577 container start 0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_robinson, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:37 compute-0 podman[89987]: 2025-11-29 06:19:37.919932742 +0000 UTC m=+0.202585431 container attach 0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_robinson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 06:19:37 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v108: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:37 compute-0 python3[90041]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 06:19:38 compute-0 sudo[90037]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:38 compute-0 sudo[90115]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrgoeohnavotbeiziuuxrqdgcjvoulhz ; /usr/bin/python3'
Nov 29 06:19:38 compute-0 sudo[90115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:38 compute-0 python3[90117]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397177.6429858-37482-104583698948214/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=d5bc1b1c0617b147c8e3e13846b179249a244079 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:19:38 compute-0 sudo[90115]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:38 compute-0 ceph-mon[74654]: from='client.14268 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:19:38 compute-0 ceph-mon[74654]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]: {
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:     "1": [
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:         {
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:             "devices": [
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:                 "/dev/loop3"
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:             ],
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:             "lv_name": "ceph_lv0",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:             "lv_size": "7511998464",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:             "name": "ceph_lv0",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:             "tags": {
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:                 "ceph.cluster_name": "ceph",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:                 "ceph.crush_device_class": "",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:                 "ceph.encrypted": "0",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:                 "ceph.osd_id": "1",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:                 "ceph.type": "block",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:                 "ceph.vdo": "0"
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:             },
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:             "type": "block",
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:             "vg_name": "ceph_vg0"
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:         }
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]:     ]
Nov 29 06:19:38 compute-0 peaceful_robinson[90039]: }
Nov 29 06:19:38 compute-0 systemd[1]: libpod-0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3.scope: Deactivated successfully.
Nov 29 06:19:38 compute-0 podman[89987]: 2025-11-29 06:19:38.817316788 +0000 UTC m=+1.099969487 container died 0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_robinson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 29 06:19:38 compute-0 sudo[90169]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyjlazzzxnxhadzmjnvxsztsdujhwrpj ; /usr/bin/python3'
Nov 29 06:19:38 compute-0 sudo[90169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5e173f2e3db63b46924b7bb1e38407b2ce7be589638601bf3b70d8b6587dcb2-merged.mount: Deactivated successfully.
Nov 29 06:19:38 compute-0 podman[89987]: 2025-11-29 06:19:38.879503889 +0000 UTC m=+1.162156578 container remove 0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 06:19:38 compute-0 systemd[1]: libpod-conmon-0b2de8238d963afc4b403a79dab7f62f97cb70bd7318526f5bc1e9b327df5ba3.scope: Deactivated successfully.
Nov 29 06:19:38 compute-0 sudo[89827]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:38 compute-0 sudo[90184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:19:38 compute-0 sudo[90184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:38 compute-0 sudo[90184]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:38 compute-0 python3[90183]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:39 compute-0 sudo[90209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:19:39 compute-0 sudo[90209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:39 compute-0 sudo[90209]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:39 compute-0 podman[90211]: 2025-11-29 06:19:39.062358594 +0000 UTC m=+0.048557999 container create 07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef (image=quay.io/ceph/ceph:v18, name=awesome_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 06:19:39 compute-0 systemd[1]: Started libpod-conmon-07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef.scope.
Nov 29 06:19:39 compute-0 podman[90211]: 2025-11-29 06:19:39.035338204 +0000 UTC m=+0.021537639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:39 compute-0 sudo[90247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:19:39 compute-0 sudo[90247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a0eb1c6a28e41f4fb97f4a0d9c2bf15116b30a3ff29d7c3b08a255da837e72/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a0eb1c6a28e41f4fb97f4a0d9c2bf15116b30a3ff29d7c3b08a255da837e72/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:39 compute-0 sudo[90247]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:39 compute-0 podman[90211]: 2025-11-29 06:19:39.168142857 +0000 UTC m=+0.154342272 container init 07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef (image=quay.io/ceph/ceph:v18, name=awesome_edison, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 06:19:39 compute-0 podman[90211]: 2025-11-29 06:19:39.178648288 +0000 UTC m=+0.164847683 container start 07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef (image=quay.io/ceph/ceph:v18, name=awesome_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:19:39 compute-0 podman[90211]: 2025-11-29 06:19:39.182546494 +0000 UTC m=+0.168745889 container attach 07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef (image=quay.io/ceph/ceph:v18, name=awesome_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 29 06:19:39 compute-0 sudo[90277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:19:39 compute-0 sudo[90277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:39 compute-0 podman[90354]: 2025-11-29 06:19:39.558365183 +0000 UTC m=+0.047829567 container create 63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:19:39 compute-0 systemd[1]: Started libpod-conmon-63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055.scope.
Nov 29 06:19:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:39 compute-0 podman[90354]: 2025-11-29 06:19:39.62441614 +0000 UTC m=+0.113880544 container init 63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:39 compute-0 podman[90354]: 2025-11-29 06:19:39.630320064 +0000 UTC m=+0.119784468 container start 63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 06:19:39 compute-0 podman[90354]: 2025-11-29 06:19:39.633796447 +0000 UTC m=+0.123260911 container attach 63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 06:19:39 compute-0 gallant_carson[90380]: 167 167
Nov 29 06:19:39 compute-0 systemd[1]: libpod-63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055.scope: Deactivated successfully.
Nov 29 06:19:39 compute-0 podman[90354]: 2025-11-29 06:19:39.635115296 +0000 UTC m=+0.124579690 container died 63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:19:39 compute-0 podman[90354]: 2025-11-29 06:19:39.544462972 +0000 UTC m=+0.033927386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:19:39 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 29 06:19:39 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 29 06:19:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c7396892fd46f12197eba32e357620c4e19b71aa9c00346ee76b0764c6abafc-merged.mount: Deactivated successfully.
Nov 29 06:19:39 compute-0 podman[90354]: 2025-11-29 06:19:39.678917294 +0000 UTC m=+0.168381698 container remove 63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 06:19:39 compute-0 systemd[1]: libpod-conmon-63da6322881d67927e5b312b3aa4f5e7b97ed9d208d4be681cc8ecea6c2e5055.scope: Deactivated successfully.
Nov 29 06:19:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 29 06:19:39 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/713391435' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 06:19:39 compute-0 ceph-mon[74654]: pgmap v108: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:39 compute-0 podman[90405]: 2025-11-29 06:19:39.867840628 +0000 UTC m=+0.040185971 container create 8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 06:19:39 compute-0 systemd[1]: Started libpod-conmon-8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429.scope.
Nov 29 06:19:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62da68cb08238a8ed465ad7586860a65658daef13707a7787ba5ebca107ee60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:39 compute-0 podman[90405]: 2025-11-29 06:19:39.849339311 +0000 UTC m=+0.021684664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62da68cb08238a8ed465ad7586860a65658daef13707a7787ba5ebca107ee60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62da68cb08238a8ed465ad7586860a65658daef13707a7787ba5ebca107ee60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62da68cb08238a8ed465ad7586860a65658daef13707a7787ba5ebca107ee60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:39 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v109: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:40 compute-0 podman[90405]: 2025-11-29 06:19:40.036149263 +0000 UTC m=+0.208494636 container init 8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 06:19:40 compute-0 podman[90405]: 2025-11-29 06:19:40.048416976 +0000 UTC m=+0.220762339 container start 8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_einstein, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 06:19:40 compute-0 podman[90405]: 2025-11-29 06:19:40.072715096 +0000 UTC m=+0.245060419 container attach 8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 06:19:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/713391435' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 06:19:40 compute-0 podman[90211]: 2025-11-29 06:19:40.655862255 +0000 UTC m=+1.642061650 container died 07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef (image=quay.io/ceph/ceph:v18, name=awesome_edison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 06:19:40 compute-0 systemd[1]: libpod-07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef.scope: Deactivated successfully.
Nov 29 06:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0a0eb1c6a28e41f4fb97f4a0d9c2bf15116b30a3ff29d7c3b08a255da837e72-merged.mount: Deactivated successfully.
Nov 29 06:19:40 compute-0 podman[90211]: 2025-11-29 06:19:40.712650936 +0000 UTC m=+1.698850321 container remove 07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef (image=quay.io/ceph/ceph:v18, name=awesome_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 06:19:40 compute-0 systemd[1]: libpod-conmon-07edeb099061320435a696fb2151be785c91b589ff020803a866e3902e3543ef.scope: Deactivated successfully.
Nov 29 06:19:40 compute-0 sudo[90169]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:40 compute-0 ceph-mon[74654]: 2.9 scrub starts
Nov 29 06:19:40 compute-0 ceph-mon[74654]: 2.9 scrub ok
Nov 29 06:19:40 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/713391435' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 06:19:40 compute-0 ceph-mon[74654]: pgmap v109: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:40 compute-0 ceph-mon[74654]: 2.1d scrub starts
Nov 29 06:19:40 compute-0 ceph-mon[74654]: 2.1d scrub ok
Nov 29 06:19:40 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/713391435' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 06:19:40 compute-0 brave_einstein[90421]: {
Nov 29 06:19:40 compute-0 brave_einstein[90421]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:19:40 compute-0 brave_einstein[90421]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:19:40 compute-0 brave_einstein[90421]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:19:40 compute-0 brave_einstein[90421]:         "osd_id": 1,
Nov 29 06:19:40 compute-0 brave_einstein[90421]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:19:40 compute-0 brave_einstein[90421]:         "type": "bluestore"
Nov 29 06:19:40 compute-0 brave_einstein[90421]:     }
Nov 29 06:19:40 compute-0 brave_einstein[90421]: }
Nov 29 06:19:40 compute-0 systemd[1]: libpod-8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429.scope: Deactivated successfully.
Nov 29 06:19:40 compute-0 podman[90405]: 2025-11-29 06:19:40.881802066 +0000 UTC m=+1.054147399 container died 8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_einstein, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d62da68cb08238a8ed465ad7586860a65658daef13707a7787ba5ebca107ee60-merged.mount: Deactivated successfully.
Nov 29 06:19:40 compute-0 podman[90405]: 2025-11-29 06:19:40.933477596 +0000 UTC m=+1.105822919 container remove 8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_einstein, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 06:19:40 compute-0 systemd[1]: libpod-conmon-8bccaa9050bdf40d55c7768703f2c11a785c306e2797d643678bdc1212a11429.scope: Deactivated successfully.
Nov 29 06:19:40 compute-0 sudo[90277]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:19:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:19:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:19:41 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:19:41 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:41 compute-0 sudo[90491]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzfjszqamegoyuowozuwmrbebedyifax ; /usr/bin/python3'
Nov 29 06:19:41 compute-0 sudo[90491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:41 compute-0 python3[90493]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:41 compute-0 podman[90495]: 2025-11-29 06:19:41.565225705 +0000 UTC m=+0.042908962 container create e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba (image=quay.io/ceph/ceph:v18, name=elegant_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 06:19:41 compute-0 systemd[1]: Started libpod-conmon-e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba.scope.
Nov 29 06:19:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5a343d228e3c0b5386b8c41f6ce98f9abff635d51d36911c476b714f1bf801a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5a343d228e3c0b5386b8c41f6ce98f9abff635d51d36911c476b714f1bf801a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:41 compute-0 podman[90495]: 2025-11-29 06:19:41.63327137 +0000 UTC m=+0.110954677 container init e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba (image=quay.io/ceph/ceph:v18, name=elegant_newton, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:19:41 compute-0 podman[90495]: 2025-11-29 06:19:41.640642038 +0000 UTC m=+0.118325315 container start e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba (image=quay.io/ceph/ceph:v18, name=elegant_newton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:41 compute-0 podman[90495]: 2025-11-29 06:19:41.549467208 +0000 UTC m=+0.027150485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:41 compute-0 podman[90495]: 2025-11-29 06:19:41.644160873 +0000 UTC m=+0.121844180 container attach e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba (image=quay.io/ceph/ceph:v18, name=elegant_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 06:19:41 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v110: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:42 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:42 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:42 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:42 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 06:19:42 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1241390295' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 06:19:42 compute-0 elegant_newton[90511]: 
Nov 29 06:19:42 compute-0 elegant_newton[90511]: {"fsid":"336ec58c-893b-528f-a0c1-6ed1196bc047","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":12,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":2,"osd_up_since":1764397129,"num_in_osds":3,"osd_in_since":1764397176,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":38}],"num_pgs":38,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56020992,"bytes_avail":14967975936,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T06:17:55.922038+0000","services":{}},"progress_events":{}}
Nov 29 06:19:42 compute-0 systemd[1]: libpod-e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba.scope: Deactivated successfully.
Nov 29 06:19:42 compute-0 conmon[90511]: conmon e1f8a52f6ec7d42b84eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba.scope/container/memory.events
Nov 29 06:19:42 compute-0 podman[90495]: 2025-11-29 06:19:42.249037486 +0000 UTC m=+0.726720783 container died e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba (image=quay.io/ceph/ceph:v18, name=elegant_newton, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 06:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5a343d228e3c0b5386b8c41f6ce98f9abff635d51d36911c476b714f1bf801a-merged.mount: Deactivated successfully.
Nov 29 06:19:42 compute-0 podman[90495]: 2025-11-29 06:19:42.30118031 +0000 UTC m=+0.778863577 container remove e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba (image=quay.io/ceph/ceph:v18, name=elegant_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 06:19:42 compute-0 systemd[1]: libpod-conmon-e1f8a52f6ec7d42b84ebd1b4716dd70bdd61cd647aa5a2e31ce20935a8938aba.scope: Deactivated successfully.
Nov 29 06:19:42 compute-0 sudo[90491]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:19:42 compute-0 sudo[90570]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmoldqghqccljfvijsawapocejqqpsqs ; /usr/bin/python3'
Nov 29 06:19:42 compute-0 sudo[90570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 29 06:19:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 06:19:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:19:42 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:42 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Nov 29 06:19:42 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Nov 29 06:19:42 compute-0 python3[90572]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:42 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 29 06:19:42 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 29 06:19:42 compute-0 podman[90573]: 2025-11-29 06:19:42.660747299 +0000 UTC m=+0.026502736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:42 compute-0 podman[90573]: 2025-11-29 06:19:42.870041157 +0000 UTC m=+0.235796514 container create 6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09 (image=quay.io/ceph/ceph:v18, name=great_galois, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 06:19:43 compute-0 systemd[1]: Started libpod-conmon-6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09.scope.
Nov 29 06:19:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:43 compute-0 ceph-mon[74654]: pgmap v110: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:43 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1241390295' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 06:19:43 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 06:19:43 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:19:43 compute-0 ceph-mon[74654]: Deploying daemon osd.2 on compute-2
Nov 29 06:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48390325f80cc7148cc34765490bf46ec64eda6a60057e103ea9199acab1ad85/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48390325f80cc7148cc34765490bf46ec64eda6a60057e103ea9199acab1ad85/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:43 compute-0 ceph-mon[74654]: 2.1b scrub starts
Nov 29 06:19:43 compute-0 podman[90573]: 2025-11-29 06:19:43.252281357 +0000 UTC m=+0.618036694 container init 6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09 (image=quay.io/ceph/ceph:v18, name=great_galois, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:43 compute-0 podman[90573]: 2025-11-29 06:19:43.262616023 +0000 UTC m=+0.628371370 container start 6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09 (image=quay.io/ceph/ceph:v18, name=great_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 06:19:43 compute-0 podman[90573]: 2025-11-29 06:19:43.266865179 +0000 UTC m=+0.632620506 container attach 6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09 (image=quay.io/ceph/ceph:v18, name=great_galois, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:19:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 06:19:43 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264614796' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 06:19:43 compute-0 great_galois[90589]: 
Nov 29 06:19:43 compute-0 great_galois[90589]: {"epoch":3,"fsid":"336ec58c-893b-528f-a0c1-6ed1196bc047","modified":"2025-11-29T06:19:24.108161Z","created":"2025-11-29T06:16:01.724679Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Nov 29 06:19:43 compute-0 great_galois[90589]: dumped monmap epoch 3
Nov 29 06:19:43 compute-0 systemd[1]: libpod-6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09.scope: Deactivated successfully.
Nov 29 06:19:43 compute-0 podman[90573]: 2025-11-29 06:19:43.899822424 +0000 UTC m=+1.265577741 container died 6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09 (image=quay.io/ceph/ceph:v18, name=great_galois, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:19:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-48390325f80cc7148cc34765490bf46ec64eda6a60057e103ea9199acab1ad85-merged.mount: Deactivated successfully.
Nov 29 06:19:43 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v111: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:44 compute-0 podman[90573]: 2025-11-29 06:19:44.125978711 +0000 UTC m=+1.491734048 container remove 6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09 (image=quay.io/ceph/ceph:v18, name=great_galois, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 06:19:44 compute-0 systemd[1]: libpod-conmon-6f37a8689d9ada5544b29c5ca0efdcb56624fe3d5899e1b0879258fc721a8c09.scope: Deactivated successfully.
Nov 29 06:19:44 compute-0 sudo[90570]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:44 compute-0 ceph-mon[74654]: 2.1b scrub ok
Nov 29 06:19:44 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/264614796' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 06:19:44 compute-0 sudo[90649]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbvkuqxqainacyyrdsmwpdhocrkjotfe ; /usr/bin/python3'
Nov 29 06:19:44 compute-0 sudo[90649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:44 compute-0 python3[90651]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:44 compute-0 podman[90652]: 2025-11-29 06:19:44.973406477 +0000 UTC m=+0.091247823 container create 7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049 (image=quay.io/ceph/ceph:v18, name=clever_raman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 06:19:45 compute-0 podman[90652]: 2025-11-29 06:19:44.920012516 +0000 UTC m=+0.037853892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:45 compute-0 systemd[1]: Started libpod-conmon-7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049.scope.
Nov 29 06:19:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257171d7a576ba494360751dcf8a3dae1e48b33e73600a6f04bbcb2147f558b2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257171d7a576ba494360751dcf8a3dae1e48b33e73600a6f04bbcb2147f558b2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:45 compute-0 podman[90652]: 2025-11-29 06:19:45.106310563 +0000 UTC m=+0.224151989 container init 7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049 (image=quay.io/ceph/ceph:v18, name=clever_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 06:19:45 compute-0 podman[90652]: 2025-11-29 06:19:45.115392852 +0000 UTC m=+0.233234238 container start 7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049 (image=quay.io/ceph/ceph:v18, name=clever_raman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:19:45 compute-0 podman[90652]: 2025-11-29 06:19:45.119204115 +0000 UTC m=+0.237045501 container attach 7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049 (image=quay.io/ceph/ceph:v18, name=clever_raman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:19:45 compute-0 ceph-mon[74654]: pgmap v111: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 29 06:19:45 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2969688060' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 06:19:45 compute-0 clever_raman[90667]: [client.openstack]
Nov 29 06:19:45 compute-0 clever_raman[90667]:         key = AQCBjyppAAAAABAAXQRTF6pnk4WV7TfvJo0Mjg==
Nov 29 06:19:45 compute-0 clever_raman[90667]:         caps mgr = "allow *"
Nov 29 06:19:45 compute-0 clever_raman[90667]:         caps mon = "profile rbd"
Nov 29 06:19:45 compute-0 clever_raman[90667]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 29 06:19:45 compute-0 systemd[1]: libpod-7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049.scope: Deactivated successfully.
Nov 29 06:19:45 compute-0 podman[90652]: 2025-11-29 06:19:45.803018216 +0000 UTC m=+0.920859602 container died 7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049 (image=quay.io/ceph/ceph:v18, name=clever_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:19:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-257171d7a576ba494360751dcf8a3dae1e48b33e73600a6f04bbcb2147f558b2-merged.mount: Deactivated successfully.
Nov 29 06:19:45 compute-0 podman[90652]: 2025-11-29 06:19:45.853269914 +0000 UTC m=+0.971111260 container remove 7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049 (image=quay.io/ceph/ceph:v18, name=clever_raman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:19:45 compute-0 systemd[1]: libpod-conmon-7ee4283a0166078e6da25e5a4e986d8d1faa6a5c0e54ae69167c7c7256717049.scope: Deactivated successfully.
Nov 29 06:19:45 compute-0 sudo[90649]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:45 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v112: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:46 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 29 06:19:46 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 29 06:19:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:19:47 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2969688060' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 06:19:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:19:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:19:47 compute-0 sudo[90849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrhumjpbsyquzwntjmzdbtbmflanidtx ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764397186.9534473-37554-136998890228823/async_wrapper.py j985933889021 30 /home/zuul/.ansible/tmp/ansible-tmp-1764397186.9534473-37554-136998890228823/AnsiballZ_command.py _'
Nov 29 06:19:47 compute-0 sudo[90849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:47 compute-0 ansible-async_wrapper.py[90851]: Invoked with j985933889021 30 /home/zuul/.ansible/tmp/ansible-tmp-1764397186.9534473-37554-136998890228823/AnsiballZ_command.py _
Nov 29 06:19:47 compute-0 ansible-async_wrapper.py[90854]: Starting module and watcher
Nov 29 06:19:47 compute-0 ansible-async_wrapper.py[90854]: Start watching 90855 (30)
Nov 29 06:19:47 compute-0 ansible-async_wrapper.py[90855]: Start module (90855)
Nov 29 06:19:47 compute-0 ansible-async_wrapper.py[90851]: Return async_wrapper task started.
Nov 29 06:19:47 compute-0 sudo[90849]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:47 compute-0 python3[90856]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:47 compute-0 podman[90857]: 2025-11-29 06:19:47.841096941 +0000 UTC m=+0.066157710 container create c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5 (image=quay.io/ceph/ceph:v18, name=musing_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:19:47 compute-0 systemd[1]: Started libpod-conmon-c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5.scope.
Nov 29 06:19:47 compute-0 podman[90857]: 2025-11-29 06:19:47.813199095 +0000 UTC m=+0.038259934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975ed0006c388b4abe469f1ba09f666a2bf5d31fb1027b794d60b5bd97427b0e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975ed0006c388b4abe469f1ba09f666a2bf5d31fb1027b794d60b5bd97427b0e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:47 compute-0 podman[90857]: 2025-11-29 06:19:47.948371978 +0000 UTC m=+0.173432747 container init c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5 (image=quay.io/ceph/ceph:v18, name=musing_volhard, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:19:47 compute-0 podman[90857]: 2025-11-29 06:19:47.958939711 +0000 UTC m=+0.184000470 container start c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5 (image=quay.io/ceph/ceph:v18, name=musing_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:19:47 compute-0 podman[90857]: 2025-11-29 06:19:47.963076103 +0000 UTC m=+0.188136872 container attach c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5 (image=quay.io/ceph/ceph:v18, name=musing_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 06:19:47 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v113: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:48 compute-0 ceph-mon[74654]: pgmap v112: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:48 compute-0 ceph-mon[74654]: 2.19 scrub starts
Nov 29 06:19:48 compute-0 ceph-mon[74654]: 2.19 scrub ok
Nov 29 06:19:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 29 06:19:48 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 06:19:48 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 06:19:48 compute-0 musing_volhard[90872]: 
Nov 29 06:19:48 compute-0 musing_volhard[90872]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 06:19:48 compute-0 systemd[1]: libpod-c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5.scope: Deactivated successfully.
Nov 29 06:19:48 compute-0 podman[90857]: 2025-11-29 06:19:48.566144343 +0000 UTC m=+0.791205152 container died c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5 (image=quay.io/ceph/ceph:v18, name=musing_volhard, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:19:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-975ed0006c388b4abe469f1ba09f666a2bf5d31fb1027b794d60b5bd97427b0e-merged.mount: Deactivated successfully.
Nov 29 06:19:48 compute-0 podman[90857]: 2025-11-29 06:19:48.620273456 +0000 UTC m=+0.845334205 container remove c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5 (image=quay.io/ceph/ceph:v18, name=musing_volhard, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:19:48 compute-0 systemd[1]: libpod-conmon-c7f7a71d0b69c9f8f780efb54aed962a6f24896619eab515c2671f4f57f4f9d5.scope: Deactivated successfully.
Nov 29 06:19:48 compute-0 ansible-async_wrapper.py[90855]: Module complete (90855)
Nov 29 06:19:48 compute-0 sudo[90953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwrkxhckiuvkupsyunptdtceublkhqwg ; /usr/bin/python3'
Nov 29 06:19:48 compute-0 sudo[90953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:48 compute-0 python3[90955]: ansible-ansible.legacy.async_status Invoked with jid=j985933889021.90851 mode=status _async_dir=/root/.ansible_async
Nov 29 06:19:48 compute-0 sudo[90953]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:49 compute-0 sudo[91002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cozirkddiufbnutmfvcmlfqjarrfxeof ; /usr/bin/python3'
Nov 29 06:19:49 compute-0 sudo[91002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 29 06:19:49 compute-0 python3[91004]: ansible-ansible.legacy.async_status Invoked with jid=j985933889021.90851 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 06:19:49 compute-0 sudo[91002]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:49 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 06:19:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Nov 29 06:19:49 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Nov 29 06:19:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:19:49 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:49 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:19:49 compute-0 ceph-mon[74654]: pgmap v113: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:49 compute-0 ceph-mon[74654]: from='osd.2 [v2:192.168.122.102:6800/60987518,v1:192.168.122.102:6801/60987518]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 06:19:49 compute-0 ceph-mon[74654]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 06:19:49 compute-0 ceph-mon[74654]: from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 06:19:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Nov 29 06:19:49 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 06:19:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e32 create-or-move crush item name 'osd.2' initial_weight 0.0068000000000000005 at location {host=compute-2,root=default}
Nov 29 06:19:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:19:49 compute-0 sudo[91028]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nennzttxkcpothgivquvvguxdyjekjex ; /usr/bin/python3'
Nov 29 06:19:49 compute-0 sudo[91028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:49 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:19:49 compute-0 python3[91030]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:49 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v115: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:50 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:50 compute-0 podman[91031]: 2025-11-29 06:19:50.030925692 +0000 UTC m=+0.066660365 container create a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2 (image=quay.io/ceph/ceph:v18, name=sharp_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 06:19:50 compute-0 systemd[1]: Started libpod-conmon-a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2.scope.
Nov 29 06:19:50 compute-0 podman[91031]: 2025-11-29 06:19:50.009700724 +0000 UTC m=+0.045435437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:50 compute-0 sudo[91042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8dd212d30579e2e0be881b06a3185db80aaca3c31341bdfe7f7eba4046f2a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8dd212d30579e2e0be881b06a3185db80aaca3c31341bdfe7f7eba4046f2a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:50 compute-0 sudo[91042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:50 compute-0 sudo[91042]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:50 compute-0 podman[91031]: 2025-11-29 06:19:50.127613256 +0000 UTC m=+0.163347979 container init a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2 (image=quay.io/ceph/ceph:v18, name=sharp_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 06:19:50 compute-0 podman[91031]: 2025-11-29 06:19:50.135796488 +0000 UTC m=+0.171531171 container start a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2 (image=quay.io/ceph/ceph:v18, name=sharp_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 06:19:50 compute-0 podman[91031]: 2025-11-29 06:19:50.139288131 +0000 UTC m=+0.175022864 container attach a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2 (image=quay.io/ceph/ceph:v18, name=sharp_mendeleev, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:50 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.ngsyhe started
Nov 29 06:19:50 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mgr.compute-2.ngsyhe 192.168.122.102:0/708817067; not ready for session (expect reconnect)
Nov 29 06:19:50 compute-0 sudo[91076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:19:50 compute-0 sudo[91076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:50 compute-0 sudo[91076]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:50 compute-0 sudo[91120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:19:50 compute-0 sudo[91120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 29 06:19:50 compute-0 sudo[91120]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:50 compute-0 ceph-mon[74654]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 06:19:50 compute-0 ceph-mon[74654]: osdmap e32: 3 total, 2 up, 3 in
Nov 29 06:19:50 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:50 compute-0 ceph-mon[74654]: from='osd.2 [v2:192.168.122.102:6800/60987518,v1:192.168.122.102:6801/60987518]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 06:19:50 compute-0 ceph-mon[74654]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 06:19:50 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:50 compute-0 ceph-mon[74654]: pgmap v115: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:50 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:50 compute-0 ceph-mon[74654]: Standby manager daemon compute-2.ngsyhe started
Nov 29 06:19:50 compute-0 sudo[91145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:19:50 compute-0 sudo[91145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:50 compute-0 sudo[91145]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:50 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14307 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 06:19:50 compute-0 sharp_mendeleev[91070]: 
Nov 29 06:19:50 compute-0 sharp_mendeleev[91070]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 06:19:50 compute-0 systemd[1]: libpod-a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2.scope: Deactivated successfully.
Nov 29 06:19:50 compute-0 podman[91031]: 2025-11-29 06:19:50.744245037 +0000 UTC m=+0.779979750 container died a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2 (image=quay.io/ceph/ceph:v18, name=sharp_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:19:50 compute-0 sudo[91170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:19:50 compute-0 sudo[91170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:50 compute-0 sudo[91170]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d8dd212d30579e2e0be881b06a3185db80aaca3c31341bdfe7f7eba4046f2a4-merged.mount: Deactivated successfully.
Nov 29 06:19:50 compute-0 podman[91031]: 2025-11-29 06:19:50.793948969 +0000 UTC m=+0.829683672 container remove a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2 (image=quay.io/ceph/ceph:v18, name=sharp_mendeleev, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 06:19:50 compute-0 systemd[1]: libpod-conmon-a41224477c8337df61d75834ba2417a96a7b949c423ef29860c7170c16535da2.scope: Deactivated successfully.
Nov 29 06:19:50 compute-0 sudo[91028]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:50 compute-0 sudo[91204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:19:50 compute-0 sudo[91204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:19:51 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mgr.compute-2.ngsyhe 192.168.122.102:0/708817067; not ready for session (expect reconnect)
Nov 29 06:19:51 compute-0 sudo[91204]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:51 compute-0 sudo[91287]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enivjmgthfhdlkpgmjazdnhkpatblpfw ; /usr/bin/python3'
Nov 29 06:19:51 compute-0 sudo[91287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:51 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 29 06:19:51 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 29 06:19:51 compute-0 python3[91289]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:51 compute-0 podman[91290]: 2025-11-29 06:19:51.779253787 +0000 UTC m=+0.060447181 container create 4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a (image=quay.io/ceph/ceph:v18, name=suspicious_haslett, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:19:51 compute-0 systemd[1]: Started libpod-conmon-4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a.scope.
Nov 29 06:19:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:51 compute-0 podman[91290]: 2025-11-29 06:19:51.758627417 +0000 UTC m=+0.039820841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c98b6d5bf177e2118923c5249a829dca6ad9c7d95aa03f18ad0ff77446620d3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c98b6d5bf177e2118923c5249a829dca6ad9c7d95aa03f18ad0ff77446620d3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:51 compute-0 podman[91290]: 2025-11-29 06:19:51.871722976 +0000 UTC m=+0.152916470 container init 4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a (image=quay.io/ceph/ceph:v18, name=suspicious_haslett, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 06:19:51 compute-0 podman[91290]: 2025-11-29 06:19:51.882397452 +0000 UTC m=+0.163590886 container start 4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a (image=quay.io/ceph/ceph:v18, name=suspicious_haslett, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:19:51 compute-0 podman[91290]: 2025-11-29 06:19:51.886176144 +0000 UTC m=+0.167369578 container attach 4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a (image=quay.io/ceph/ceph:v18, name=suspicious_haslett, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:51 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v116: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:52 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Nov 29 06:19:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Nov 29 06:19:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Nov 29 06:19:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:19:52 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:52 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:19:52 compute-0 ceph-mon[74654]: from='client.14307 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 06:19:52 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:19:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.vxabpq(active, since 2m), standbys: compute-2.ngsyhe
Nov 29 06:19:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:19:52 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:19:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.ngsyhe", "id": "compute-2.ngsyhe"} v 0) v1
Nov 29 06:19:52 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr metadata", "who": "compute-2.ngsyhe", "id": "compute-2.ngsyhe"}]: dispatch
Nov 29 06:19:52 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:19:52 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.1b( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564597130s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 90.237648010s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.15( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564558983s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 90.237670898s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.1b( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564597130s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.237648010s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.13( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564517021s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 90.237731934s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.10( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564764023s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 90.238029480s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.13( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564517021s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.237731934s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.10( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564764023s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.238029480s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=33 pruub=15.724649429s) [] r=-1 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 active pruub 92.398216248s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.15( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.564558983s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.237670898s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33 pruub=12.555711746s) [] r=-1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 active pruub 89.229385376s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=33 pruub=15.724649429s) [] r=-1 lpr=33 pi=[14,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398216248s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=33 pruub=12.555711746s) [] r=-1 lpr=33 pi=[18,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.229385376s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.c( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.569536209s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 90.243316650s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.d( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.569481850s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 90.243385315s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.a( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.563738823s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 active pruub 90.237670898s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.c( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.569536209s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.243316650s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.a( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.563738823s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.237670898s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:19:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 33 pg[2.d( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=33 pruub=13.569481850s) [] r=-1 lpr=33 pi=[24,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.243385315s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:19:52 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:19:52 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 06:19:52 compute-0 suspicious_haslett[91305]: 
Nov 29 06:19:52 compute-0 suspicious_haslett[91305]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 29 06:19:52 compute-0 systemd[1]: libpod-4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a.scope: Deactivated successfully.
Nov 29 06:19:52 compute-0 podman[91290]: 2025-11-29 06:19:52.551060664 +0000 UTC m=+0.832254098 container died 4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a (image=quay.io/ceph/ceph:v18, name=suspicious_haslett, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 06:19:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c98b6d5bf177e2118923c5249a829dca6ad9c7d95aa03f18ad0ff77446620d3-merged.mount: Deactivated successfully.
Nov 29 06:19:52 compute-0 ansible-async_wrapper.py[90854]: Done in kid B.
Nov 29 06:19:52 compute-0 podman[91290]: 2025-11-29 06:19:52.6012112 +0000 UTC m=+0.882404594 container remove 4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a (image=quay.io/ceph/ceph:v18, name=suspicious_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 06:19:52 compute-0 systemd[1]: libpod-conmon-4552d2c149774d7d3b410958554570173d1c134864d6687f6c2b8789f59b291a.scope: Deactivated successfully.
Nov 29 06:19:52 compute-0 sudo[91287]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.gaxpay started
Nov 29 06:19:52 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mgr.compute-1.gaxpay 192.168.122.101:0/1611816633; not ready for session (expect reconnect)
Nov 29 06:19:53 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:19:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:19:53 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:53 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:19:53 compute-0 sudo[91366]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blbumgnswyrrhelwnepekkcijtjotujb ; /usr/bin/python3'
Nov 29 06:19:53 compute-0 sudo[91366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:53 compute-0 python3[91368]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:53 compute-0 ceph-mon[74654]: purged_snaps scrub starts
Nov 29 06:19:53 compute-0 ceph-mon[74654]: purged_snaps scrub ok
Nov 29 06:19:53 compute-0 ceph-mon[74654]: 2.15 scrub starts
Nov 29 06:19:53 compute-0 ceph-mon[74654]: 2.15 scrub ok
Nov 29 06:19:53 compute-0 ceph-mon[74654]: pgmap v116: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:53 compute-0 ceph-mon[74654]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Nov 29 06:19:53 compute-0 ceph-mon[74654]: osdmap e33: 3 total, 2 up, 3 in
Nov 29 06:19:53 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:53 compute-0 ceph-mon[74654]: mgrmap e9: compute-0.vxabpq(active, since 2m), standbys: compute-2.ngsyhe
Nov 29 06:19:53 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:53 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr metadata", "who": "compute-2.ngsyhe", "id": "compute-2.ngsyhe"}]: dispatch
Nov 29 06:19:53 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:53 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:53 compute-0 ceph-mon[74654]: from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 06:19:53 compute-0 ceph-mon[74654]: Standby manager daemon compute-1.gaxpay started
Nov 29 06:19:53 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mgr.compute-1.gaxpay 192.168.122.101:0/1611816633; not ready for session (expect reconnect)
Nov 29 06:19:53 compute-0 podman[91369]: 2025-11-29 06:19:53.688405537 +0000 UTC m=+0.052143345 container create 6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf (image=quay.io/ceph/ceph:v18, name=amazing_grothendieck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:19:53 compute-0 systemd[1]: Started libpod-conmon-6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf.scope.
Nov 29 06:19:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35ee37f26dc72644b76380fe37fc403ea5b6aee28bf5b27375ced2c52dd5b277/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35ee37f26dc72644b76380fe37fc403ea5b6aee28bf5b27375ced2c52dd5b277/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:53 compute-0 podman[91369]: 2025-11-29 06:19:53.673477465 +0000 UTC m=+0.037215293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:53 compute-0 podman[91369]: 2025-11-29 06:19:53.780993899 +0000 UTC m=+0.144731747 container init 6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf (image=quay.io/ceph/ceph:v18, name=amazing_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 06:19:53 compute-0 podman[91369]: 2025-11-29 06:19:53.787972956 +0000 UTC m=+0.151710804 container start 6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf (image=quay.io/ceph/ceph:v18, name=amazing_grothendieck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 06:19:53 compute-0 podman[91369]: 2025-11-29 06:19:53.791566942 +0000 UTC m=+0.155304750 container attach 6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf (image=quay.io/ceph/ceph:v18, name=amazing_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 29 06:19:53 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v118: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:19:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:19:54 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:19:54
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'backups', 'images', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 06:19:54 compute-0 amazing_grothendieck[91384]: 
Nov 29 06:19:54 compute-0 amazing_grothendieck[91384]: [{"container_id": "47d65a8aff6f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.56%", "created": "2025-11-29T06:17:23.040678Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-29T06:17:23.103806Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T06:18:39.714127Z", "memory_usage": 11618222, "ports": [], "service_name": "crash", "started": "2025-11-29T06:17:22.908791Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@crash.compute-0", "version": "18.2.7"}, {"container_id": "4384fb97959c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.65%", "created": "2025-11-29T06:18:18.466170Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2025-11-29T06:18:18.510330Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-29T06:19:52.114424Z", "memory_usage": 11785994, "ports": [], "service_name": 
"crash", "started": "2025-11-29T06:18:18.373501Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@crash.compute-1", "version": "18.2.7"}, {"daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2025-11-29T06:19:34.246990Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "crash", "status": 2, "status_desc": "starting"}, {"container_id": "6f81410254a7", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "33.75%", "created": "2025-11-29T06:16:09.231591Z", "daemon_id": "compute-0.vxabpq", "daemon_name": "mgr.compute-0.vxabpq", "daemon_type": "mgr", "events": ["2025-11-29T06:17:28.682807Z daemon:mgr.compute-0.vxabpq [INFO] \"Reconfigured mgr.compute-0.vxabpq on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T06:18:39.713992Z", "memory_usage": 548510105, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-29T06:16:09.091594Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@mgr.compute-0.vxabpq", "version": "18.2.7"}, {"container_id": "a8b9f68ee8f2", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": 
"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "100.00%", "created": "2025-11-29T06:19:31.639791Z", "daemon_id": "compute-1.gaxpay", "daemon_name": "mgr.compute-1.gaxpay", "daemon_type": "mgr", "events": ["2025-11-29T06:19:31.709129Z daemon:mgr.compute-1.gaxpay [INFO] \"Deployed mgr.compute-1.gaxpay on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-29T06:19:52.114673Z", "memory_usage": 484546969, "ports": [8765], "service_name": "mgr", "started": "2025-11-29T06:19:31.500255Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@mgr.compute-1.gaxpay", "version": "18.2.7"}, {"daemon_id": "compute-2.ngsyhe", "daemon_name": "mgr.compute-2.ngsyhe", "daemon_type": "mgr", "events": ["2025-11-29T06:19:29.510673Z daemon:mgr.compute-2.ngsyhe [INFO] \"Deployed mgr.compute-2.ngsyhe on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [8765], "service_name": "mgr", "status": 2, "status_desc": "starting"}, {"container_id": "c3c8680245c6", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.49%", "created": "2025-11-29T06:16:03.846438Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-29T06:17:27.545002Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T06:18:39.713776Z", "memory_request": 2147483648, "memory_usage": 35316039, "ports": [], "service_name": "mon", "started": "2025-11-29T06:16:06.829437Z", "status": 1, "status_desc": 
"running", "systemd_unit": "ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@mon.compute-0", "version": "18.2.7"}, {"container_id": "6c6562254e3e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.69%", "created": "2025-11-29T06:19:21.742553Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "events": ["2025-11-29T06:19:23.913168Z daemon:mon.compute-1 [INFO] \"Deployed mon.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-29T06:19:52.114609Z", "memory_request": 2147483648, "memory_usage": 28280094, "ports": [], "service_name": "mon", "started": "2025-11-29T06:19:21.606193Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@mon.compute-1", "version": "18.2.7"}, {"daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "events": ["2025-11-29T06:19:18.671495Z daemon:mon.compute-2 [INFO] \"Deployed mon.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "memory_request": 2147483648, "ports": [], "service_name": "mon", "status": 2, "status_desc": "starting"}, {"container_id": "aaeeb4acbe44", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", 
"cpu_percentage": "7.17%", "created": "2025-11-29T06:18:33.440691Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-29T06:18:33.850035Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T06:18:39.714255Z", "memory_request": 4294967296, "memory_usage": 32495370, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T06:18:32.990355Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@osd.1", "version": "18.2.7"}, {"container_id": "142ead126c9a", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.01%", "created": "2025-11-29T06:18:34.731541Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-29T06:18:35.070740Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-29T06:19:52.114544Z", "memory_request": 5502923980, "memory_usage": 59506688, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T06:18:34.586329Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-336ec58c-893b-528f-a0c1-6ed1196bc047@osd.0", "version": "18.2.7"}, {"daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-29T06:19:47.255939Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "memory_request": 4294967296, "ports": [], "service_name": "osd.default_drive_group", "status": 2, "status_desc": 
"starting"}]
Nov 29 06:19:54 compute-0 systemd[1]: libpod-6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf.scope: Deactivated successfully.
Nov 29 06:19:54 compute-0 podman[91369]: 2025-11-29 06:19:54.311479149 +0000 UTC m=+0.675216997 container died 6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf (image=quay.io/ceph/ceph:v18, name=amazing_grothendieck, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 06:19:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-35ee37f26dc72644b76380fe37fc403ea5b6aee28bf5b27375ced2c52dd5b277-merged.mount: Deactivated successfully.
Nov 29 06:19:54 compute-0 podman[91369]: 2025-11-29 06:19:54.462049108 +0000 UTC m=+0.825786956 container remove 6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf (image=quay.io/ceph/ceph:v18, name=amazing_grothendieck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Nov 29 06:19:54 compute-0 systemd[1]: libpod-conmon-6e83b1fcc0594a647b6421859174a78cf577854ce76f5d8ca5da7044f2d8dfdf.scope: Deactivated successfully.
Nov 29 06:19:54 compute-0 sudo[91366]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:19:54 compute-0 rsyslogd[1007]: message too long (9871) with configured size 8096, begin of message is: [{"container_id": "47d65a8aff6f", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mgr.compute-1.gaxpay 192.168.122.101:0/1611816633; not ready for session (expect reconnect)
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.009242174413735343 quantized to 1 (current 1)
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 06:19:54 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 06:19:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 06:19:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:19:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 29 06:19:55 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:19:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:19:55 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:55 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:19:55 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.vxabpq(active, since 3m), standbys: compute-2.ngsyhe, compute-1.gaxpay
Nov 29 06:19:55 compute-0 sudo[91442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azbwicpohdkwzqrjnvxkwydhkeweslxz ; /usr/bin/python3'
Nov 29 06:19:55 compute-0 sudo[91442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.gaxpay", "id": "compute-1.gaxpay"} v 0) v1
Nov 29 06:19:55 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr metadata", "who": "compute-1.gaxpay", "id": "compute-1.gaxpay"}]: dispatch
Nov 29 06:19:55 compute-0 python3[91444]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:55 compute-0 podman[91445]: 2025-11-29 06:19:55.57748845 +0000 UTC m=+0.028154464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:55 compute-0 podman[91445]: 2025-11-29 06:19:55.707537641 +0000 UTC m=+0.158203665 container create 21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5 (image=quay.io/ceph/ceph:v18, name=condescending_brahmagupta, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 06:19:55 compute-0 systemd[1]: Started libpod-conmon-21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5.scope.
Nov 29 06:19:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44c2c382eadcb5cc0ae05955944d10323ff3f14a059193b8caf814a6f1b6c3b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44c2c382eadcb5cc0ae05955944d10323ff3f14a059193b8caf814a6f1b6c3b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:55 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v119: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:56 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:19:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:19:56 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:56 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:19:56 compute-0 podman[91445]: 2025-11-29 06:19:56.117955386 +0000 UTC m=+0.568621450 container init 21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5 (image=quay.io/ceph/ceph:v18, name=condescending_brahmagupta, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:19:56 compute-0 podman[91445]: 2025-11-29 06:19:56.128627452 +0000 UTC m=+0.579293466 container start 21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5 (image=quay.io/ceph/ceph:v18, name=condescending_brahmagupta, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:19:56 compute-0 podman[91445]: 2025-11-29 06:19:56.341029552 +0000 UTC m=+0.791695616 container attach 21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5 (image=quay.io/ceph/ceph:v18, name=condescending_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:56 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:19:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e34 e34: 3 total, 2 up, 3 in
Nov 29 06:19:56 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:56 compute-0 ceph-mon[74654]: pgmap v118: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:56 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:56 compute-0 ceph-mon[74654]: from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 06:19:56 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:56 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 2 up, 3 in
Nov 29 06:19:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:19:56 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:56 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:19:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:19:56 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev f17a2b4e-8ac5-45c2-afc8-67a9786cff10 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 06:19:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 06:19:56 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:19:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 06:19:56 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4274267034' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 06:19:56 compute-0 condescending_brahmagupta[91460]: 
Nov 29 06:19:56 compute-0 condescending_brahmagupta[91460]: {"fsid":"336ec58c-893b-528f-a0c1-6ed1196bc047","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":27,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":34,"num_osds":3,"num_up_osds":2,"osd_up_since":1764397129,"num_in_osds":3,"osd_in_since":1764397176,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":38}],"num_pgs":38,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56037376,"bytes_avail":14967959552,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-11-29T06:19:51.975610+0000","services":{"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 29 06:19:56 compute-0 systemd[1]: libpod-21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5.scope: Deactivated successfully.
Nov 29 06:19:56 compute-0 podman[91445]: 2025-11-29 06:19:56.816156923 +0000 UTC m=+1.266822907 container died 21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5 (image=quay.io/ceph/ceph:v18, name=condescending_brahmagupta, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 06:19:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c44c2c382eadcb5cc0ae05955944d10323ff3f14a059193b8caf814a6f1b6c3b-merged.mount: Deactivated successfully.
Nov 29 06:19:57 compute-0 podman[91445]: 2025-11-29 06:19:57.03758012 +0000 UTC m=+1.488246094 container remove 21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5 (image=quay.io/ceph/ceph:v18, name=condescending_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:57 compute-0 systemd[1]: libpod-conmon-21a6a135e583520ca524f3413fbafc8866dac01ae16edbc46a1be1570ffef6e5.scope: Deactivated successfully.
Nov 29 06:19:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:57 compute-0 sudo[91442]: pam_unix(sudo:session): session closed for user root
Nov 29 06:19:57 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:19:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:19:57 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:57 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:19:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:19:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 29 06:19:57 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v121: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:19:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:19:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:19:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:57 compute-0 ceph-mon[74654]: mgrmap e10: compute-0.vxabpq(active, since 3m), standbys: compute-2.ngsyhe, compute-1.gaxpay
Nov 29 06:19:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr metadata", "who": "compute-1.gaxpay", "id": "compute-1.gaxpay"}]: dispatch
Nov 29 06:19:57 compute-0 ceph-mon[74654]: pgmap v119: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:19:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:57 compute-0 ceph-mon[74654]: osdmap e34: 3 total, 2 up, 3 in
Nov 29 06:19:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:19:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/4274267034' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 06:19:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:19:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:58 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:19:58 compute-0 sudo[91521]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehwvkptukwycrqbrlgosmuofwzcaltio ; /usr/bin/python3'
Nov 29 06:19:58 compute-0 sudo[91521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:19:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:19:58 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:58 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:19:58 compute-0 python3[91523]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:19:58 compute-0 podman[91524]: 2025-11-29 06:19:58.728520695 +0000 UTC m=+0.112781741 container create 29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:19:58 compute-0 podman[91524]: 2025-11-29 06:19:58.653239756 +0000 UTC m=+0.037500882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:19:58 compute-0 systemd[1]: Started libpod-conmon-29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494.scope.
Nov 29 06:19:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8050e8cfa258648331c7f3a59d11576970b7f392272ea046a587b3dd5cec24ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8050e8cfa258648331c7f3a59d11576970b7f392272ea046a587b3dd5cec24ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:19:58 compute-0 podman[91524]: 2025-11-29 06:19:58.897386866 +0000 UTC m=+0.281647922 container init 29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:19:58 compute-0 podman[91524]: 2025-11-29 06:19:58.904349853 +0000 UTC m=+0.288610889 container start 29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:19:58 compute-0 podman[91524]: 2025-11-29 06:19:58.93263068 +0000 UTC m=+0.316891746 container attach 29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:19:59 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:19:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:19:59 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:59 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:19:59 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:19:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e35 e35: 3 total, 2 up, 3 in
Nov 29 06:19:59 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 2 up, 3 in
Nov 29 06:19:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:19:59 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:59 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:19:59 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev 67b0cd5d-139a-461d-8d6d-720f496a076f (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 06:19:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 06:19:59 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:19:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 06:19:59 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2162770432' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 06:19:59 compute-0 upbeat_engelbart[91539]: 
Nov 29 06:19:59 compute-0 upbeat_engelbart[91539]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502923980","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""}]
Nov 29 06:19:59 compute-0 systemd[1]: libpod-29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494.scope: Deactivated successfully.
Nov 29 06:19:59 compute-0 podman[91524]: 2025-11-29 06:19:59.435281246 +0000 UTC m=+0.819542342 container died 29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:19:59 compute-0 ceph-mon[74654]: pgmap v121: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:59 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:19:59 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:19:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8050e8cfa258648331c7f3a59d11576970b7f392272ea046a587b3dd5cec24ec-merged.mount: Deactivated successfully.
Nov 29 06:19:59 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v123: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:19:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:19:59 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:19:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:19:59 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Nov 29 06:20:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Nov 29 06:20:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Nov 29 06:20:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Nov 29 06:20:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Nov 29 06:20:00 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74650]: 2025-11-29T06:19:59.999+0000 7fe45807e640 -1 log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Nov 29 06:20:00 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74650]: 2025-11-29T06:19:59.999+0000 7fe45807e640 -1 log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Nov 29 06:20:00 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74650]: 2025-11-29T06:19:59.999+0000 7fe45807e640 -1 log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Nov 29 06:20:00 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74650]: 2025-11-29T06:19:59.999+0000 7fe45807e640 -1 log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Nov 29 06:20:00 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0[74650]: 2025-11-29T06:19:59.999+0000 7fe45807e640 -1 log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Nov 29 06:20:00 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:00 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:00 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:00 compute-0 podman[91524]: 2025-11-29 06:20:00.21678386 +0000 UTC m=+1.601044936 container remove 29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494 (image=quay.io/ceph/ceph:v18, name=upbeat_engelbart, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 06:20:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 29 06:20:00 compute-0 sudo[91521]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:00 compute-0 systemd[1]: libpod-conmon-29d3dddc1081012a762c26914632085030db3e154fa749037797d63e7e01d494.scope: Deactivated successfully.
Nov 29 06:20:01 compute-0 sudo[91599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfeegbmsshzpsotltkzhbndqmcvistxo ; /usr/bin/python3'
Nov 29 06:20:01 compute-0 sudo[91599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:20:01 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:01 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:01 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:01 compute-0 python3[91601]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:20:01 compute-0 podman[91602]: 2025-11-29 06:20:01.390569571 +0000 UTC m=+0.116990785 container create e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73 (image=quay.io/ceph/ceph:v18, name=compassionate_yonath, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 06:20:01 compute-0 podman[91602]: 2025-11-29 06:20:01.299408021 +0000 UTC m=+0.025829275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:20:01 compute-0 systemd[1]: Started libpod-conmon-e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73.scope.
Nov 29 06:20:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ce61ce7d7de35e788380e21334a28bc9c3137d8972029e46d5dbfdff1c502d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ce61ce7d7de35e788380e21334a28bc9c3137d8972029e46d5dbfdff1c502d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:01 compute-0 podman[91602]: 2025-11-29 06:20:01.519347775 +0000 UTC m=+0.245769019 container init e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73 (image=quay.io/ceph/ceph:v18, name=compassionate_yonath, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 06:20:01 compute-0 podman[91602]: 2025-11-29 06:20:01.524693923 +0000 UTC m=+0.251115127 container start e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73 (image=quay.io/ceph/ceph:v18, name=compassionate_yonath, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:20:01 compute-0 podman[91602]: 2025-11-29 06:20:01.56712864 +0000 UTC m=+0.293549894 container attach e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73 (image=quay.io/ceph/ceph:v18, name=compassionate_yonath, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:20:01 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:01 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:20:01 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:01 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e36 e36: 3 total, 2 up, 3 in
Nov 29 06:20:01 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 2 up, 3 in
Nov 29 06:20:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:01 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:01 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:01 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev b629c199-66cb-4b94-9dcf-515b4b078ad9 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 06:20:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 29 06:20:01 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 06:20:01 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 36 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=36 pruub=6.332367420s) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398216248s@ mbc={}] start_peering_interval up [] -> [], acting [] -> [], acting_primary ? -> -1, up_primary ? -> -1, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:01 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 36 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=36 pruub=9.596056938s) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active pruub 95.661926270s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:01 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 36 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=36 pruub=9.596056938s) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown pruub 95.661926270s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:01 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 36 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=36 pruub=6.332367420s) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.398216248s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:01 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v125: 100 pgs: 62 unknown, 38 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:20:01 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:02 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:02 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:02 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 29 06:20:02 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3618548784' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 29 06:20:02 compute-0 compassionate_yonath[91617]: mimic
Nov 29 06:20:02 compute-0 systemd[1]: libpod-e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73.scope: Deactivated successfully.
Nov 29 06:20:02 compute-0 podman[91642]: 2025-11-29 06:20:02.220636272 +0000 UTC m=+0.043236141 container died e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73 (image=quay.io/ceph/ceph:v18, name=compassionate_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:20:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:20:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 29 06:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-51ce61ce7d7de35e788380e21334a28bc9c3137d8972029e46d5dbfdff1c502d-merged.mount: Deactivated successfully.
Nov 29 06:20:03 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:03 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:03 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:03 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:03 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:20:03 compute-0 ceph-mon[74654]: osdmap e35: 3 total, 2 up, 3 in
Nov 29 06:20:03 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:03 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:20:03 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2162770432' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 06:20:03 compute-0 ceph-mon[74654]: pgmap v123: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:03 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:03 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:03 compute-0 ceph-mon[74654]: Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Nov 29 06:20:03 compute-0 ceph-mon[74654]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Nov 29 06:20:03 compute-0 ceph-mon[74654]:     fs cephfs is offline because no MDS is active for it.
Nov 29 06:20:03 compute-0 ceph-mon[74654]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Nov 29 06:20:03 compute-0 ceph-mon[74654]:     fs cephfs has 0 MDS online, but wants 1
Nov 29 06:20:03 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:03 compute-0 podman[91642]: 2025-11-29 06:20:03.748622453 +0000 UTC m=+1.571222322 container remove e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73 (image=quay.io/ceph/ceph:v18, name=compassionate_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 06:20:03 compute-0 systemd[1]: libpod-conmon-e9be07e73db8868bbabf7df55245924511d0ddd06f69d2577da4dc43d784ea73.scope: Deactivated successfully.
Nov 29 06:20:03 compute-0 sudo[91599]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:03 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v126: 100 pgs: 62 unknown, 38 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:20:03 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:04 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:04 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:04 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:04 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 06:20:04 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e37 e37: 3 total, 2 up, 3 in
Nov 29 06:20:04 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 2 up, 3 in
Nov 29 06:20:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:04 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:04 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:04 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev fc739ab0-ca91-423f-b0ae-3ebb6cf4e220 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 06:20:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 06:20:04 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.e( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.d( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.4( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.a( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.2( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.1( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.5( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.3( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1f( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.6( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.18( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1d( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.1a( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.f( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.8( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.13( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.14( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.15( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.16( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.12( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.3( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.11( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.19( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.4( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.1e( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.7( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.7( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.6( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=0.587849140s) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.229385376s@ mbc={}] start_peering_interval up [] -> [], acting [] -> [], acting_primary ? -> -1, up_primary ? -> -1, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.5( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.a( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.d( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.2( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.18( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.1f( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.b( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.c( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.c( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.b( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.f( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.8( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.e( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.17( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.10( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.14( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.13( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.12( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.11( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.16( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.17( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.10( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1e( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.19( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1c( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.1b( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1b( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.1c( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[3.1d( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=36) [] r=-1 lpr=36 pi=[14,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1a( empty local-lis/les=16/17 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=0.587849140s) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.229385376s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.d( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.e( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.5( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.3( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.6( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1d( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1f( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.f( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.13( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.16( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.15( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.19( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.7( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.4( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.2( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.a( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.b( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.18( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.c( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.8( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.17( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.14( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.12( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1e( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.10( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1b( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.11( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1a( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.1c( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 37 pg[4.0( empty local-lis/les=36/37 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=36) [1] r=0 lpr=36 pi=[16,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:04 compute-0 ceph-mgr[74948]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Nov 29 06:20:04 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 29 06:20:04 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 29 06:20:04 compute-0 sudo[91680]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxipjdicjmxafpfapscnugkbosmvsely ; /usr/bin/python3'
Nov 29 06:20:04 compute-0 sudo[91680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:20:04 compute-0 python3[91682]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:20:04 compute-0 podman[91683]: 2025-11-29 06:20:04.843021213 +0000 UTC m=+0.043053716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:20:04 compute-0 podman[91683]: 2025-11-29 06:20:04.935002557 +0000 UTC m=+0.135035020 container create 9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd (image=quay.io/ceph/ceph:v18, name=nervous_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:20:04 compute-0 systemd[1]: Started libpod-conmon-9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd.scope.
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:05 compute-0 ceph-mon[74654]: osdmap e36: 3 total, 2 up, 3 in
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 06:20:05 compute-0 ceph-mon[74654]: pgmap v125: 100 pgs: 62 unknown, 38 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3618548784' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:05 compute-0 ceph-mon[74654]: osdmap e37: 3 total, 2 up, 3 in
Nov 29 06:20:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a61c7ad62fcfa25acf6093162de647c72f45a1049b41f8becb92fea647899af9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a61c7ad62fcfa25acf6093162de647c72f45a1049b41f8becb92fea647899af9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:05 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:20:05 compute-0 podman[91683]: 2025-11-29 06:20:05.109413992 +0000 UTC m=+0.309446535 container init 9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd (image=quay.io/ceph/ceph:v18, name=nervous_gauss, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:20:05 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:05 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:05 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:05 compute-0 podman[91683]: 2025-11-29 06:20:05.120414188 +0000 UTC m=+0.320446641 container start 9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd (image=quay.io/ceph/ceph:v18, name=nervous_gauss, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:20:05 compute-0 podman[91683]: 2025-11-29 06:20:05.151769577 +0000 UTC m=+0.351802050 container attach 9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd (image=quay.io/ceph/ceph:v18, name=nervous_gauss, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 06:20:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 29 06:20:05 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 29 06:20:05 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 29 06:20:05 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 29 06:20:05 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 06:20:05 compute-0 ceph-mgr[74948]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Nov 29 06:20:05 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Nov 29 06:20:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 06:20:05 compute-0 ceph-mgr[74948]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 29 06:20:05 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 29 06:20:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:20:05 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:20:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:20:05 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:20:05 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 29 06:20:05 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 29 06:20:05 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 29 06:20:05 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 29 06:20:05 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 29 06:20:05 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 29 06:20:05 compute-0 sudo[91722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:05 compute-0 sudo[91722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:05 compute-0 sudo[91722]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:05 compute-0 sudo[91747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 29 06:20:05 compute-0 sudo[91747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:05 compute-0 sudo[91747]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:05 compute-0 sudo[91772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:05 compute-0 sudo[91772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:05 compute-0 sudo[91772]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 29 06:20:05 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3247558833' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 29 06:20:05 compute-0 nervous_gauss[91699]: 
Nov 29 06:20:05 compute-0 nervous_gauss[91699]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":2},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":8}}
Nov 29 06:20:05 compute-0 systemd[1]: libpod-9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd.scope: Deactivated successfully.
Nov 29 06:20:05 compute-0 podman[91683]: 2025-11-29 06:20:05.788256545 +0000 UTC m=+0.988289028 container died 9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd (image=quay.io/ceph/ceph:v18, name=nervous_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:20:05 compute-0 sudo[91797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph
Nov 29 06:20:05 compute-0 sudo[91797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:05 compute-0 sudo[91797]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:05 compute-0 sudo[91835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:05 compute-0 sudo[91835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:05 compute-0 sudo[91835]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a61c7ad62fcfa25acf6093162de647c72f45a1049b41f8becb92fea647899af9-merged.mount: Deactivated successfully.
Nov 29 06:20:05 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v128: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 06:20:05 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 06:20:05 compute-0 sudo[91861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.conf.new
Nov 29 06:20:05 compute-0 sudo[91861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:05 compute-0 sudo[91861]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:20:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e38 e38: 3 total, 2 up, 3 in
Nov 29 06:20:06 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 2 up, 3 in
Nov 29 06:20:06 compute-0 podman[91683]: 2025-11-29 06:20:06.069645838 +0000 UTC m=+1.269678291 container remove 9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd (image=quay.io/ceph/ceph:v18, name=nervous_gauss, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 06:20:06 compute-0 sudo[91886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:06 compute-0 systemd[1]: libpod-conmon-9f2f09217e54e2ad51183317230eeb57f8a93ffe6afe696267d50acfa2cdbabd.scope: Deactivated successfully.
Nov 29 06:20:06 compute-0 sudo[91886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:06 compute-0 sudo[91886]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:06 compute-0 sudo[91680]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:06 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev 48b278c5-da9f-479f-8dad-a73732aa1447 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev f17a2b4e-8ac5-45c2-afc8-67a9786cff10 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event f17a2b4e-8ac5-45c2-afc8-67a9786cff10 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 9 seconds
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev 67b0cd5d-139a-461d-8d6d-720f496a076f (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event 67b0cd5d-139a-461d-8d6d-720f496a076f (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev b629c199-66cb-4b94-9dcf-515b4b078ad9 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event b629c199-66cb-4b94-9dcf-515b4b078ad9 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 4 seconds
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev fc739ab0-ca91-423f-b0ae-3ebb6cf4e220 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event fc739ab0-ca91-423f-b0ae-3ebb6cf4e220 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 2 seconds
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev 48b278c5-da9f-479f-8dad-a73732aa1447 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event 48b278c5-da9f-479f-8dad-a73732aa1447 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.2( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.4( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.7( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.1e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.1c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.12( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.14( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.5( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.17( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.18( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.6( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.1( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.3( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.19( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.9( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.8( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.16( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.15( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.13( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.10( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.11( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.1f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.1d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.1a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 38 pg[5.1b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [] r=-1 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:06 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:06 compute-0 sudo[91911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:20:06 compute-0 sudo[91911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:06 compute-0 sudo[91911]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:06 compute-0 sudo[91936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:06 compute-0 sudo[91936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:06 compute-0 sudo[91936]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:06 compute-0 sudo[91961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.conf.new
Nov 29 06:20:06 compute-0 sudo[91961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:06 compute-0 sudo[91961]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:06 compute-0 sudo[92009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:06 compute-0 sudo[92009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:06 compute-0 sudo[92009]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:06 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 29 06:20:06 compute-0 sudo[92034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.conf.new
Nov 29 06:20:06 compute-0 sudo[92034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:06 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 29 06:20:06 compute-0 sudo[92034]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:06 compute-0 sudo[92059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:06 compute-0 sudo[92059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:06 compute-0 sudo[92059]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:06 compute-0 sudo[92084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.conf.new
Nov 29 06:20:06 compute-0 sudo[92084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:06 compute-0 sudo[92084]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:06 compute-0 sudo[92109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:06 compute-0 sudo[92109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:06 compute-0 sudo[92109]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:06 compute-0 sudo[92134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 29 06:20:06 compute-0 sudo[92134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:06 compute-0 sudo[92134]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:20:06 compute-0 sudo[92159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:06 compute-0 sudo[92159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:06 compute-0 sudo[92159]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:20:06 compute-0 sudo[92184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config
Nov 29 06:20:06 compute-0 sudo[92184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:06 compute-0 sudo[92184]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:20:06 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:20:06 compute-0 sudo[92209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:06 compute-0 sudo[92209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:06 compute-0 sudo[92209]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 29 06:20:07 compute-0 sudo[92234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config
Nov 29 06:20:07 compute-0 sudo[92234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:07 compute-0 sudo[92234]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:07 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:07 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:07 compute-0 sudo[92259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:07 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:07 compute-0 sudo[92259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:07 compute-0 sudo[92259]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:07 compute-0 sudo[92284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf.new
Nov 29 06:20:07 compute-0 sudo[92284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:07 compute-0 sudo[92284]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:07 compute-0 sudo[92309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:07 compute-0 sudo[92309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:07 compute-0 sudo[92309]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:07 compute-0 sudo[92334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:20:07 compute-0 sudo[92334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:07 compute-0 sudo[92334]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:07 compute-0 sudo[92359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:07 compute-0 sudo[92359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:07 compute-0 sudo[92359]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:07 compute-0 sudo[92384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf.new
Nov 29 06:20:07 compute-0 sudo[92384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:07 compute-0 sudo[92384]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:07 compute-0 sudo[92432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:07 compute-0 sudo[92432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:07 compute-0 sudo[92432]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:07 compute-0 sudo[92457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf.new
Nov 29 06:20:07 compute-0 sudo[92457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:07 compute-0 sudo[92457]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:07 compute-0 sudo[92482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:07 compute-0 sudo[92482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:07 compute-0 sudo[92482]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:07 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v130: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:20:07 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 06:20:07 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 06:20:07 compute-0 sudo[92507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf.new
Nov 29 06:20:07 compute-0 sudo[92507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:07 compute-0 sudo[92507]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:08 compute-0 sudo[92532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:08 compute-0 sudo[92532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:08 compute-0 sudo[92532]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:08 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:08 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:08 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:08 compute-0 sudo[92557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-336ec58c-893b-528f-a0c1-6ed1196bc047/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf.new /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:20:08 compute-0 sudo[92557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:08 compute-0 sudo[92557]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:20:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:20:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:20:09 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:09 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:09 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:09 compute-0 ceph-mgr[74948]: [progress INFO root] Writing back 11 completed events
Nov 29 06:20:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 06:20:09 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v131: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:20:09 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 06:20:09 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 06:20:10 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:10 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:10 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:10 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 29 06:20:10 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 29 06:20:11 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:11 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:11 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v132: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:20:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 06:20:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 06:20:12 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:12 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 29 06:20:12 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 29 06:20:13 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:13 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 29 06:20:13 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 29 06:20:13 compute-0 ceph-mon[74654]: pgmap v126: 100 pgs: 62 unknown, 38 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:20:13 compute-0 ceph-mon[74654]: 4.1 scrub starts
Nov 29 06:20:13 compute-0 ceph-mon[74654]: 4.1 scrub ok
Nov 29 06:20:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:13 compute-0 ceph-mon[74654]: 4.2 scrub starts
Nov 29 06:20:13 compute-0 ceph-mon[74654]: 4.2 scrub ok
Nov 29 06:20:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 06:20:13 compute-0 ceph-mon[74654]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 29 06:20:13 compute-0 ceph-mon[74654]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 29 06:20:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:20:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:20:13 compute-0 ceph-mon[74654]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 06:20:13 compute-0 ceph-mon[74654]: Updating compute-1:/etc/ceph/ceph.conf
Nov 29 06:20:13 compute-0 ceph-mon[74654]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 06:20:13 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3247558833' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 29 06:20:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 06:20:13 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v133: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:20:13 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 06:20:13 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 06:20:14 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:14 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 06:20:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e39 e39: 3 total, 2 up, 3 in
Nov 29 06:20:14 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:14 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 2 up, 3 in
Nov 29 06:20:14 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:20:15 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:15 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:15 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:20:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 29 06:20:15 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v135: 146 pgs: 77 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:20:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: pgmap v128: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:20:16 compute-0 ceph-mon[74654]: osdmap e38: 3 total, 2 up, 3 in
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: 4.3 scrub starts
Nov 29 06:20:16 compute-0 ceph-mon[74654]: 4.3 scrub ok
Nov 29 06:20:16 compute-0 ceph-mon[74654]: Updating compute-0:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:20:16 compute-0 ceph-mon[74654]: Updating compute-1:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:20:16 compute-0 ceph-mon[74654]: Updating compute-2:/var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/config/ceph.conf
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: pgmap v130: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: OSD bench result of 1381.921175 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 06:20:16 compute-0 ceph-mon[74654]: pgmap v131: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: 4.4 scrub starts
Nov 29 06:20:16 compute-0 ceph-mon[74654]: 4.4 scrub ok
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: pgmap v132: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: 4.5 scrub starts
Nov 29 06:20:16 compute-0 ceph-mon[74654]: 4.5 scrub ok
Nov 29 06:20:16 compute-0 ceph-mon[74654]: 4.6 scrub starts
Nov 29 06:20:16 compute-0 ceph-mon[74654]: 4.6 scrub ok
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 06:20:16 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 39 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=39 pruub=8.360642433s) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active pruub 108.834419250s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:16 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 39 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=39 pruub=8.360642433s) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown pruub 108.834419250s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:16 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:16 compute-0 sshd-session[92582]: Invalid user alex from 138.124.186.225 port 35016
Nov 29 06:20:16 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:16 compute-0 sshd-session[92582]: Received disconnect from 138.124.186.225 port 35016:11: Bye Bye [preauth]
Nov 29 06:20:16 compute-0 sshd-session[92582]: Disconnected from invalid user alex 138.124.186.225 port 35016 [preauth]
Nov 29 06:20:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:16 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:20:16 compute-0 ceph-mgr[74948]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 06:20:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 06:20:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 06:20:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 06:20:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 06:20:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 29 06:20:17 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/60987518; not ready for session (expect reconnect)
Nov 29 06:20:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:17 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/60987518,v1:192.168.122.102:6801/60987518] boot
Nov 29 06:20:17 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 29 06:20:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 06:20:17 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:20:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:17 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v137: 177 pgs: 108 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 29 06:20:19 compute-0 sshd-session[92584]: Received disconnect from 79.116.35.29 port 50296:11: Bye Bye [preauth]
Nov 29 06:20:19 compute-0 sshd-session[92584]: Disconnected from authenticating user root 79.116.35.29 port 50296 [preauth]
Nov 29 06:20:19 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 29 06:20:19 compute-0 ceph-mon[74654]: pgmap v133: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:19 compute-0 ceph-mon[74654]: osdmap e39: 3 total, 2 up, 3 in
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:19 compute-0 ceph-mon[74654]: pgmap v135: 146 pgs: 77 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:19 compute-0 ceph-mon[74654]: osd.2 [v2:192.168.122.102:6800/60987518,v1:192.168.122.102:6801/60987518] boot
Nov 29 06:20:19 compute-0 ceph-mon[74654]: osdmap e40: 3 total, 3 up, 3 in
Nov 29 06:20:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.f( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.c( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.2( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.1( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.2( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.7( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.4( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.4( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.4( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.2( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.2( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.4( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.4( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.7( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.7( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1a( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1a( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.1b( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.1b( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.12( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.15( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.12( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.15( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.14( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.14( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.14( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.13( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.13( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.14( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.17( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.10( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.17( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.10( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.d( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.5( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.5( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.6( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.18( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.18( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.6( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.5( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.2( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.6( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.3( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.3( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.3( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.8( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.19( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.c( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.d( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.c( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.d( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.19( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.9( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.d( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.d( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.e( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.a( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.a( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.b( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.9( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[2.a( empty local-lis/les=24/25 n=0 ec=17/12 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=-1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.b( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.9( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[6.b( empty local-lis/les=20/21 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.8( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.16( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.16( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.8( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.15( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.10( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.15( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.10( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.13( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.13( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.13( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.10( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.13( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.10( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.11( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.11( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.19( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.19( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1c( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1c( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[5.1f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=40) [2] r=-1 lpr=40 pi=[18,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=14/15 n=0 ec=36/14 lis/c=14/14 les/c/f=15/15/0 sis=40) [2] r=-1 lpr=40 pi=[14,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:19 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:19 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 7 peering, 108 unknown, 62 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:21 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 24 peering, 93 unknown, 60 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:23 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 95 peering, 31 unknown, 51 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:20:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:20:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:20:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:20:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:20:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:20:24 compute-0 sshd-session[92586]: Invalid user localhost from 104.208.108.166 port 18272
Nov 29 06:20:24 compute-0 sshd-session[92586]: Received disconnect from 104.208.108.166 port 18272:11: Bye Bye [preauth]
Nov 29 06:20:24 compute-0 sshd-session[92586]: Disconnected from invalid user localhost 104.208.108.166 port 18272 [preauth]
Nov 29 06:20:25 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 95 peering, 31 unknown, 51 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:27 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.8 deep-scrub starts
Nov 29 06:20:27 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.8 deep-scrub ok
Nov 29 06:20:27 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v142: 177 pgs: 95 peering, 31 unknown, 51 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 29 06:20:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:29 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 29 06:20:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.f( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.1( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.7( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.d( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.4( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.6( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.5( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.3( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.0( empty local-lis/les=39/41 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.8( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.2( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.9( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.e( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.a( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.b( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 41 pg[6.c( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [1] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:20:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:20:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:20:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:20:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:20:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:29 compute-0 ceph-mon[74654]: pgmap v137: 177 pgs: 108 unknown, 69 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 06:20:29 compute-0 ceph-mon[74654]: 4.7 scrub starts
Nov 29 06:20:29 compute-0 ceph-mon[74654]: 4.7 scrub ok
Nov 29 06:20:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:29 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v144: 177 pgs: 95 peering, 31 unknown, 51 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:30 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:30 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev eaf0f8c9-d8ab-4004-a696-5edb2077dc20 does not exist
Nov 29 06:20:30 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev ef8e146f-5a62-4e89-b1d5-d6820051da58 does not exist
Nov 29 06:20:30 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 07963277-4d37-4e82-ae54-b1888f5688ae does not exist
Nov 29 06:20:30 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:20:30 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:20:30 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:20:30 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:20:30 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:20:30 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:20:30 compute-0 sudo[92588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:30 compute-0 sudo[92588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:30 compute-0 sudo[92588]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:30 compute-0 sudo[92613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:20:30 compute-0 sudo[92613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:30 compute-0 sudo[92613]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:30 compute-0 sudo[92638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:30 compute-0 sudo[92638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:30 compute-0 sudo[92638]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:30 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.9 deep-scrub starts
Nov 29 06:20:30 compute-0 sudo[92663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:20:30 compute-0 sudo[92663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:30 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.9 deep-scrub ok
Nov 29 06:20:30 compute-0 podman[92728]: 2025-11-29 06:20:30.72867326 +0000 UTC m=+0.022319771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:20:30 compute-0 podman[92728]: 2025-11-29 06:20:30.926812388 +0000 UTC m=+0.220458849 container create db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:20:31 compute-0 systemd[1]: Started libpod-conmon-db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86.scope.
Nov 29 06:20:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:20:31 compute-0 podman[92728]: 2025-11-29 06:20:31.380264767 +0000 UTC m=+0.673911228 container init db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:20:31 compute-0 podman[92728]: 2025-11-29 06:20:31.396736855 +0000 UTC m=+0.690383316 container start db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:20:31 compute-0 affectionate_murdock[92744]: 167 167
Nov 29 06:20:31 compute-0 systemd[1]: libpod-db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86.scope: Deactivated successfully.
Nov 29 06:20:31 compute-0 ceph-mon[74654]: pgmap v138: 177 pgs: 7 peering, 108 unknown, 62 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:31 compute-0 ceph-mon[74654]: pgmap v139: 177 pgs: 24 peering, 93 unknown, 60 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:31 compute-0 ceph-mon[74654]: pgmap v140: 177 pgs: 95 peering, 31 unknown, 51 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:31 compute-0 ceph-mon[74654]: pgmap v141: 177 pgs: 95 peering, 31 unknown, 51 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:31 compute-0 ceph-mon[74654]: 4.8 deep-scrub starts
Nov 29 06:20:31 compute-0 ceph-mon[74654]: 4.8 deep-scrub ok
Nov 29 06:20:31 compute-0 ceph-mon[74654]: pgmap v142: 177 pgs: 95 peering, 31 unknown, 51 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:31 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:20:31 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:31 compute-0 ceph-mon[74654]: osdmap e41: 3 total, 3 up, 3 in
Nov 29 06:20:31 compute-0 ceph-mon[74654]: pgmap v144: 177 pgs: 95 peering, 31 unknown, 51 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:31 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:31 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:20:31 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:20:31 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:20:31 compute-0 podman[92728]: 2025-11-29 06:20:31.827388437 +0000 UTC m=+1.121034908 container attach db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 06:20:31 compute-0 podman[92728]: 2025-11-29 06:20:31.828403578 +0000 UTC m=+1.122050059 container died db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 06:20:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-84adc087714af5b45bf20e8b466af974b82463bec579500c8de68f180e660495-merged.mount: Deactivated successfully.
Nov 29 06:20:31 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 78 peering, 99 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:32 compute-0 podman[92728]: 2025-11-29 06:20:32.002842414 +0000 UTC m=+1.296488875 container remove db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_murdock, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 06:20:32 compute-0 systemd[1]: libpod-conmon-db1969c77c261e98716eb001ca8e0c66f0f695756228786ea7a695f2c23f1b86.scope: Deactivated successfully.
Nov 29 06:20:32 compute-0 podman[92767]: 2025-11-29 06:20:32.245994524 +0000 UTC m=+0.112461051 container create 6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_borg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:20:32 compute-0 podman[92767]: 2025-11-29 06:20:32.170679514 +0000 UTC m=+0.037146091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:20:32 compute-0 systemd[1]: Started libpod-conmon-6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e.scope.
Nov 29 06:20:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7a79dfbff13448a63edc0886ec9b61d5b9895b9f493fb2c1b95ff3b162b56f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7a79dfbff13448a63edc0886ec9b61d5b9895b9f493fb2c1b95ff3b162b56f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7a79dfbff13448a63edc0886ec9b61d5b9895b9f493fb2c1b95ff3b162b56f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7a79dfbff13448a63edc0886ec9b61d5b9895b9f493fb2c1b95ff3b162b56f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f7a79dfbff13448a63edc0886ec9b61d5b9895b9f493fb2c1b95ff3b162b56f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:32 compute-0 podman[92767]: 2025-11-29 06:20:32.369060039 +0000 UTC m=+0.235526576 container init 6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_borg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:20:32 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Nov 29 06:20:32 compute-0 podman[92767]: 2025-11-29 06:20:32.380807027 +0000 UTC m=+0.247273524 container start 6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_borg, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:20:32 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Nov 29 06:20:32 compute-0 podman[92767]: 2025-11-29 06:20:32.392083601 +0000 UTC m=+0.258550098 container attach 6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:20:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:20:32 compute-0 ceph-mon[74654]: 4.9 deep-scrub starts
Nov 29 06:20:32 compute-0 ceph-mon[74654]: 4.9 deep-scrub ok
Nov 29 06:20:32 compute-0 ceph-mon[74654]: 5.c deep-scrub starts
Nov 29 06:20:32 compute-0 ceph-mon[74654]: 5.c deep-scrub ok
Nov 29 06:20:33 compute-0 zen_borg[92783]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:20:33 compute-0 zen_borg[92783]: --> relative data size: 1.0
Nov 29 06:20:33 compute-0 zen_borg[92783]: --> All data devices are unavailable
Nov 29 06:20:33 compute-0 systemd[1]: libpod-6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e.scope: Deactivated successfully.
Nov 29 06:20:33 compute-0 podman[92767]: 2025-11-29 06:20:33.189091354 +0000 UTC m=+1.055557851 container died 6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_borg, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:20:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f7a79dfbff13448a63edc0886ec9b61d5b9895b9f493fb2c1b95ff3b162b56f-merged.mount: Deactivated successfully.
Nov 29 06:20:33 compute-0 podman[92767]: 2025-11-29 06:20:33.260140578 +0000 UTC m=+1.126607145 container remove 6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:20:33 compute-0 systemd[1]: libpod-conmon-6abdba51dbc5e843b66025b6ffbdc199bc307945bf5d0fee311f7a1d3b19bf2e.scope: Deactivated successfully.
Nov 29 06:20:33 compute-0 sudo[92663]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:33 compute-0 sudo[92812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:33 compute-0 sudo[92812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:33 compute-0 sudo[92812]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:33 compute-0 sudo[92837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:20:33 compute-0 sudo[92837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:33 compute-0 sudo[92837]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:33 compute-0 sudo[92862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:33 compute-0 sudo[92862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:33 compute-0 sudo[92862]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:33 compute-0 sudo[92887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:20:33 compute-0 sudo[92887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:33 compute-0 sshd-session[92788]: Invalid user alex from 31.6.212.12 port 47392
Nov 29 06:20:33 compute-0 sshd-session[92788]: Received disconnect from 31.6.212.12 port 47392:11: Bye Bye [preauth]
Nov 29 06:20:33 compute-0 sshd-session[92788]: Disconnected from invalid user alex 31.6.212.12 port 47392 [preauth]
Nov 29 06:20:33 compute-0 podman[92950]: 2025-11-29 06:20:33.890504015 +0000 UTC m=+0.066678061 container create 3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euclid, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 06:20:33 compute-0 systemd[1]: Started libpod-conmon-3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f.scope.
Nov 29 06:20:33 compute-0 podman[92950]: 2025-11-29 06:20:33.849725051 +0000 UTC m=+0.025899147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:20:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:20:33 compute-0 podman[92950]: 2025-11-29 06:20:33.969672824 +0000 UTC m=+0.145846870 container init 3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 06:20:33 compute-0 podman[92950]: 2025-11-29 06:20:33.976346317 +0000 UTC m=+0.152520363 container start 3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euclid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:20:33 compute-0 frosty_euclid[92968]: 167 167
Nov 29 06:20:33 compute-0 systemd[1]: libpod-3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f.scope: Deactivated successfully.
Nov 29 06:20:33 compute-0 podman[92950]: 2025-11-29 06:20:33.979515028 +0000 UTC m=+0.155689104 container attach 3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euclid, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 06:20:33 compute-0 podman[92950]: 2025-11-29 06:20:33.981758672 +0000 UTC m=+0.157932718 container died 3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 06:20:33 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:33 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:20:33 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:33 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:20:33 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:33 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 06:20:33 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 06:20:33 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:20:33 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:33 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:20:33 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c3c5c6400dce5fda5be41cbf4e046e0fa61a6f2d5c933e8b45a493fd2d5ce2f-merged.mount: Deactivated successfully.
Nov 29 06:20:34 compute-0 podman[92950]: 2025-11-29 06:20:34.018768738 +0000 UTC m=+0.194942784 container remove 3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_euclid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 06:20:34 compute-0 systemd[1]: libpod-conmon-3fe7cb5f8a49deeed9689fd78a37aec4ba35d1007b1cb0546108c9e47fadde4f.scope: Deactivated successfully.
Nov 29 06:20:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 29 06:20:34 compute-0 podman[92991]: 2025-11-29 06:20:34.191681876 +0000 UTC m=+0.049220158 container create e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_davinci, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:20:34 compute-0 ceph-mon[74654]: pgmap v145: 177 pgs: 78 peering, 99 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:34 compute-0 ceph-mon[74654]: 4.a deep-scrub starts
Nov 29 06:20:34 compute-0 ceph-mon[74654]: 4.a deep-scrub ok
Nov 29 06:20:34 compute-0 ceph-mon[74654]: 5.2 scrub starts
Nov 29 06:20:34 compute-0 ceph-mon[74654]: 5.2 scrub ok
Nov 29 06:20:34 compute-0 systemd[1]: Started libpod-conmon-e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e.scope.
Nov 29 06:20:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:20:34 compute-0 podman[92991]: 2025-11-29 06:20:34.165475522 +0000 UTC m=+0.023013834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e830be9267e2f35f51f918336748e6606ad77a7ef067424a91acac31ab50f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e830be9267e2f35f51f918336748e6606ad77a7ef067424a91acac31ab50f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e830be9267e2f35f51f918336748e6606ad77a7ef067424a91acac31ab50f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e830be9267e2f35f51f918336748e6606ad77a7ef067424a91acac31ab50f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:34 compute-0 podman[92991]: 2025-11-29 06:20:34.278581018 +0000 UTC m=+0.136119320 container init e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 06:20:34 compute-0 podman[92991]: 2025-11-29 06:20:34.285018624 +0000 UTC m=+0.142556906 container start e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:20:34 compute-0 podman[92991]: 2025-11-29 06:20:34.288829683 +0000 UTC m=+0.146367995 container attach e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_davinci, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 06:20:35 compute-0 practical_davinci[93007]: {
Nov 29 06:20:35 compute-0 practical_davinci[93007]:     "1": [
Nov 29 06:20:35 compute-0 practical_davinci[93007]:         {
Nov 29 06:20:35 compute-0 practical_davinci[93007]:             "devices": [
Nov 29 06:20:35 compute-0 practical_davinci[93007]:                 "/dev/loop3"
Nov 29 06:20:35 compute-0 practical_davinci[93007]:             ],
Nov 29 06:20:35 compute-0 practical_davinci[93007]:             "lv_name": "ceph_lv0",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:             "lv_size": "7511998464",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:             "name": "ceph_lv0",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:             "tags": {
Nov 29 06:20:35 compute-0 practical_davinci[93007]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:                 "ceph.cluster_name": "ceph",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:                 "ceph.crush_device_class": "",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:                 "ceph.encrypted": "0",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:                 "ceph.osd_id": "1",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:                 "ceph.type": "block",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:                 "ceph.vdo": "0"
Nov 29 06:20:35 compute-0 practical_davinci[93007]:             },
Nov 29 06:20:35 compute-0 practical_davinci[93007]:             "type": "block",
Nov 29 06:20:35 compute-0 practical_davinci[93007]:             "vg_name": "ceph_vg0"
Nov 29 06:20:35 compute-0 practical_davinci[93007]:         }
Nov 29 06:20:35 compute-0 practical_davinci[93007]:     ]
Nov 29 06:20:35 compute-0 practical_davinci[93007]: }
Nov 29 06:20:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:20:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:20:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 06:20:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:20:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:20:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 29 06:20:35 compute-0 systemd[1]: libpod-e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e.scope: Deactivated successfully.
Nov 29 06:20:35 compute-0 podman[92991]: 2025-11-29 06:20:35.063773814 +0000 UTC m=+0.921312096 container died e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_davinci, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:20:35 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.011550903s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675140381s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.110986710s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775024414s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.110947609s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775024414s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.e( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.011586189s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675216675s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010724068s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675140381s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.e( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010685921s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675216675s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.3( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010158539s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675186157s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109983444s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775039673s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.3( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010080338s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675186157s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109938622s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775039673s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.6( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010134697s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675292969s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.6( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010087013s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675292969s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1d( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010012627s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675369263s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109688759s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775054932s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1f( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010006905s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675384521s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109664917s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775054932s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1d( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009965897s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675369263s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1f( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009955406s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675384521s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.13( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009961128s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675552368s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.13( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009877205s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675552368s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.010269165s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675262451s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.15( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009781837s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675582886s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.15( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009729385s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675582886s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.19( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009824753s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675582886s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109353065s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775207520s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109261513s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775207520s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009334564s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675262451s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109200478s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775253296s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.2( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109324455s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775314331s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.19( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009631157s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675582886s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109173775s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775253296s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.2( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.109218597s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775314331s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009421349s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675613403s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009397507s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675613403s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009315491s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675613403s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009278297s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675613403s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.18( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009585381s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675949097s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.8( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.108906746s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775283813s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.18( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009565353s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675949097s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.a( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009493828s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675933838s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.8( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.108855247s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775283813s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.e( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.108835220s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775375366s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.a( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009438515s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675933838s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.e( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.108816147s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775375366s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.c( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009423256s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.675994873s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.c( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009394646s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.675994873s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.a( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.108706474s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 129.775405884s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009249687s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.676055908s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[6.a( empty local-lis/les=39/41 n=0 ec=39/20 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=10.108654976s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.775405884s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009185791s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.676055908s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009135246s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.676010132s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009145737s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.676025391s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009089470s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.676010132s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009093285s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.676025391s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1c( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009002686s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.676071167s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1b( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.008987427s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.676071167s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1c( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.008976936s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.676071167s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1a( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.009074211s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active pruub 128.676071167s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1b( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.008937836s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.676071167s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[4.1a( empty local-lis/les=36/37 n=0 ec=36/16 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=9.008885384s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 128.676071167s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:20:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-56e830be9267e2f35f51f918336748e6606ad77a7ef067424a91acac31ab50f2-merged.mount: Deactivated successfully.
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 podman[92991]: 2025-11-29 06:20:35.337185026 +0000 UTC m=+1.194723348 container remove e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:20:35 compute-0 systemd[1]: libpod-conmon-e7b4a3a3304f46a35b72d7b9d75aecbac93f5648e89eba2c888ef9b799670a5e.scope: Deactivated successfully.
Nov 29 06:20:35 compute-0 sudo[92887]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:35 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 29 06:20:35 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 29 06:20:35 compute-0 sudo[93028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:35 compute-0 sudo[93028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:35 compute-0 sudo[93028]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:35 compute-0 sudo[93053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:20:35 compute-0 sudo[93053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:35 compute-0 sudo[93053]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:35 compute-0 sudo[93078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:35 compute-0 sudo[93078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:35 compute-0 sudo[93078]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:35 compute-0 sudo[93103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:20:35 compute-0 sudo[93103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.17( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.4( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.2( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.19( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.6( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[5.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 42 pg[3.b( empty local-lis/les=0/0 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:35 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 14 peering, 163 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 29 06:20:36 compute-0 podman[93169]: 2025-11-29 06:20:36.017510303 +0000 UTC m=+0.024005232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:20:36 compute-0 ceph-mon[74654]: pgmap v146: 177 pgs: 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 06:20:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:20:36 compute-0 ceph-mon[74654]: 7.1 scrub starts
Nov 29 06:20:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:20:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:20:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 06:20:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:20:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:20:36 compute-0 ceph-mon[74654]: osdmap e42: 3 total, 3 up, 3 in
Nov 29 06:20:36 compute-0 podman[93169]: 2025-11-29 06:20:36.279422454 +0000 UTC m=+0.285917383 container create 8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_merkle, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 06:20:36 compute-0 systemd[1]: Started libpod-conmon-8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07.scope.
Nov 29 06:20:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:20:36 compute-0 podman[93169]: 2025-11-29 06:20:36.637791132 +0000 UTC m=+0.644286051 container init 8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_merkle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:20:36 compute-0 podman[93169]: 2025-11-29 06:20:36.644527976 +0000 UTC m=+0.651022865 container start 8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 06:20:36 compute-0 happy_merkle[93185]: 167 167
Nov 29 06:20:36 compute-0 systemd[1]: libpod-8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07.scope: Deactivated successfully.
Nov 29 06:20:36 compute-0 podman[93169]: 2025-11-29 06:20:36.649275233 +0000 UTC m=+0.655770142 container attach 8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:20:36 compute-0 podman[93169]: 2025-11-29 06:20:36.650185099 +0000 UTC m=+0.656679988 container died 8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_merkle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:20:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-abf0fb127359f941f25557de3092731ae4976734341faa206ce7dd392f0a3941-merged.mount: Deactivated successfully.
Nov 29 06:20:36 compute-0 podman[93169]: 2025-11-29 06:20:36.683073296 +0000 UTC m=+0.689568185 container remove 8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_merkle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:20:36 compute-0 systemd[1]: libpod-conmon-8e8f45794574911dcfb8be7046cfd07457c95d608b71e54ba6631f6668926a07.scope: Deactivated successfully.
Nov 29 06:20:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 29 06:20:36 compute-0 podman[93209]: 2025-11-29 06:20:36.826476295 +0000 UTC m=+0.045893413 container create ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gould, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 06:20:36 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.1e( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.10( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.17( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.b( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=42/43 n=0 ec=36/14 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.6( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=40/21 lis/c=40/40 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:36 compute-0 systemd[1]: Started libpod-conmon-ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad.scope.
Nov 29 06:20:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:20:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c932cfcfa3ed4f9cb911f67c594f6081d273bfb19a9105de53f1a3b875a97e34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c932cfcfa3ed4f9cb911f67c594f6081d273bfb19a9105de53f1a3b875a97e34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c932cfcfa3ed4f9cb911f67c594f6081d273bfb19a9105de53f1a3b875a97e34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c932cfcfa3ed4f9cb911f67c594f6081d273bfb19a9105de53f1a3b875a97e34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:36 compute-0 podman[93209]: 2025-11-29 06:20:36.801565157 +0000 UTC m=+0.020982275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:20:36 compute-0 podman[93209]: 2025-11-29 06:20:36.93192368 +0000 UTC m=+0.151340808 container init ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 06:20:36 compute-0 podman[93209]: 2025-11-29 06:20:36.938384276 +0000 UTC m=+0.157801364 container start ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gould, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 06:20:36 compute-0 podman[93209]: 2025-11-29 06:20:36.955550891 +0000 UTC m=+0.174967989 container attach ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gould, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:20:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:20:37 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 29 06:20:37 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 29 06:20:37 compute-0 ceph-mon[74654]: 7.1 scrub ok
Nov 29 06:20:37 compute-0 ceph-mon[74654]: 3.a deep-scrub starts
Nov 29 06:20:37 compute-0 ceph-mon[74654]: 3.a deep-scrub ok
Nov 29 06:20:37 compute-0 ceph-mon[74654]: 7.2 scrub starts
Nov 29 06:20:37 compute-0 ceph-mon[74654]: 4.b scrub starts
Nov 29 06:20:37 compute-0 ceph-mon[74654]: 4.b scrub ok
Nov 29 06:20:37 compute-0 ceph-mon[74654]: 5.4 scrub starts
Nov 29 06:20:37 compute-0 ceph-mon[74654]: 5.4 scrub ok
Nov 29 06:20:37 compute-0 ceph-mon[74654]: pgmap v148: 177 pgs: 14 peering, 163 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:37 compute-0 ceph-mon[74654]: 7.7 scrub starts
Nov 29 06:20:37 compute-0 ceph-mon[74654]: 7.7 scrub ok
Nov 29 06:20:37 compute-0 ceph-mon[74654]: osdmap e43: 3 total, 3 up, 3 in
Nov 29 06:20:37 compute-0 ecstatic_gould[93225]: {
Nov 29 06:20:37 compute-0 ecstatic_gould[93225]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:20:37 compute-0 ecstatic_gould[93225]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:20:37 compute-0 ecstatic_gould[93225]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:20:37 compute-0 ecstatic_gould[93225]:         "osd_id": 1,
Nov 29 06:20:37 compute-0 ecstatic_gould[93225]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:20:37 compute-0 ecstatic_gould[93225]:         "type": "bluestore"
Nov 29 06:20:37 compute-0 ecstatic_gould[93225]:     }
Nov 29 06:20:37 compute-0 ecstatic_gould[93225]: }
Nov 29 06:20:37 compute-0 systemd[1]: libpod-ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad.scope: Deactivated successfully.
Nov 29 06:20:37 compute-0 podman[93209]: 2025-11-29 06:20:37.75021946 +0000 UTC m=+0.969636538 container died ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:20:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 29 06:20:37 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 55 peering, 122 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 29 06:20:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c932cfcfa3ed4f9cb911f67c594f6081d273bfb19a9105de53f1a3b875a97e34-merged.mount: Deactivated successfully.
Nov 29 06:20:38 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 29 06:20:38 compute-0 ceph-mon[74654]: 4.f scrub starts
Nov 29 06:20:38 compute-0 ceph-mon[74654]: 4.f scrub ok
Nov 29 06:20:38 compute-0 ceph-mon[74654]: osdmap e44: 3 total, 3 up, 3 in
Nov 29 06:20:38 compute-0 podman[93209]: 2025-11-29 06:20:38.89225965 +0000 UTC m=+2.111676768 container remove ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:20:38 compute-0 systemd[1]: libpod-conmon-ba6c042fa54b03b956c4a001f447ec3e69392ef4146cffa47bc7a6f65adedaad.scope: Deactivated successfully.
Nov 29 06:20:38 compute-0 sudo[93103]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:20:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:20:39 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:39 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev f17e2d30-47bb-4995-954f-855268d5acf9 (Updating rgw.rgw deployment (+3 -> 3))
Nov 29 06:20:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pkypgd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 06:20:39 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pkypgd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 06:20:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 75 peering, 102 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:40 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 29 06:20:40 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 29 06:20:41 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pkypgd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 06:20:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 06:20:41 compute-0 ceph-mon[74654]: pgmap v150: 177 pgs: 55 peering, 122 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pkypgd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 06:20:41 compute-0 ceph-mon[74654]: 7.c scrub starts
Nov 29 06:20:41 compute-0 ceph-mon[74654]: 7.c scrub ok
Nov 29 06:20:41 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:20:41 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:20:41 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.pkypgd on compute-2
Nov 29 06:20:41 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.pkypgd on compute-2
Nov 29 06:20:41 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 20 peering, 157 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:42 compute-0 ceph-mon[74654]: 3.9 scrub starts
Nov 29 06:20:42 compute-0 ceph-mon[74654]: 3.9 scrub ok
Nov 29 06:20:42 compute-0 ceph-mon[74654]: pgmap v152: 177 pgs: 75 peering, 102 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:42 compute-0 ceph-mon[74654]: 4.10 scrub starts
Nov 29 06:20:42 compute-0 ceph-mon[74654]: 4.10 scrub ok
Nov 29 06:20:42 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.pkypgd", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 06:20:42 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:42 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:20:42 compute-0 ceph-mon[74654]: Deploying daemon rgw.rgw.compute-2.pkypgd on compute-2
Nov 29 06:20:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:20:42 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.11 deep-scrub starts
Nov 29 06:20:42 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.11 deep-scrub ok
Nov 29 06:20:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:20:43 compute-0 ceph-mon[74654]: pgmap v153: 177 pgs: 20 peering, 157 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:43 compute-0 ceph-mon[74654]: 4.11 deep-scrub starts
Nov 29 06:20:43 compute-0 ceph-mon[74654]: 4.11 deep-scrub ok
Nov 29 06:20:43 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 29 06:20:45 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:20:45 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 29 06:20:46 compute-0 ceph-mon[74654]: pgmap v154: 177 pgs: 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:46 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 29 06:20:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 29 06:20:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 06:20:46 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 45 pg[8.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 29 06:20:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 06:20:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 06:20:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 29 06:20:47 compute-0 ceph-mon[74654]: 3.1a scrub starts
Nov 29 06:20:47 compute-0 ceph-mon[74654]: 3.1a scrub ok
Nov 29 06:20:47 compute-0 ceph-mon[74654]: 7.d scrub starts
Nov 29 06:20:47 compute-0 ceph-mon[74654]: 7.d scrub ok
Nov 29 06:20:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:47 compute-0 ceph-mon[74654]: pgmap v155: 177 pgs: 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:47 compute-0 ceph-mon[74654]: osdmap e45: 3 total, 3 up, 3 in
Nov 29 06:20:47 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1290272359' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 06:20:47 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 06:20:47 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 29 06:20:47 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v158: 178 pgs: 1 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:48 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cbugbv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 06:20:48 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cbugbv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 06:20:48 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 46 pg[8.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:48 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cbugbv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 06:20:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 06:20:48 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:20:48 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:20:48 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.cbugbv on compute-1
Nov 29 06:20:48 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.cbugbv on compute-1
Nov 29 06:20:48 compute-0 ceph-mon[74654]: 7.12 scrub starts
Nov 29 06:20:48 compute-0 ceph-mon[74654]: 7.12 scrub ok
Nov 29 06:20:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:48 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 06:20:48 compute-0 ceph-mon[74654]: osdmap e46: 3 total, 3 up, 3 in
Nov 29 06:20:48 compute-0 ceph-mon[74654]: pgmap v158: 178 pgs: 1 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cbugbv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 06:20:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cbugbv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 06:20:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:20:48 compute-0 ceph-mon[74654]: Deploying daemon rgw.rgw.compute-1.cbugbv on compute-1
Nov 29 06:20:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 29 06:20:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 29 06:20:49 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 29 06:20:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 29 06:20:49 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 06:20:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 47 pg[9.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:49 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v160: 179 pgs: 2 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 29 06:20:50 compute-0 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 06:20:50 compute-0 ceph-mon[74654]: osdmap e47: 3 total, 3 up, 3 in
Nov 29 06:20:50 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1290272359' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 06:20:50 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 06:20:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:20:50 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 06:20:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 29 06:20:50 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 29 06:20:50 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 48 pg[9.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:50 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:20:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 06:20:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vmptkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 06:20:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vmptkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 06:20:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vmptkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 06:20:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 06:20:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:20:51 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:20:51 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.vmptkp on compute-0
Nov 29 06:20:51 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.vmptkp on compute-0
Nov 29 06:20:51 compute-0 sudo[93268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:51 compute-0 sudo[93268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:51 compute-0 sudo[93268]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:51 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 29 06:20:51 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 29 06:20:51 compute-0 sudo[93293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:20:51 compute-0 sudo[93293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:51 compute-0 sudo[93293]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:51 compute-0 sudo[93318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:20:51 compute-0 sudo[93318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:51 compute-0 sudo[93318]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:51 compute-0 ceph-mon[74654]: pgmap v160: 179 pgs: 2 unknown, 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:20:51 compute-0 ceph-mon[74654]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 06:20:51 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 06:20:51 compute-0 ceph-mon[74654]: osdmap e48: 3 total, 3 up, 3 in
Nov 29 06:20:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:51 compute-0 ceph-mon[74654]: 7.15 scrub starts
Nov 29 06:20:51 compute-0 ceph-mon[74654]: 7.15 scrub ok
Nov 29 06:20:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vmptkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 06:20:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.vmptkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 06:20:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:20:51 compute-0 sudo[93343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:20:51 compute-0 sudo[93343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:20:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 29 06:20:51 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v162: 179 pgs: 1 creating+peering, 178 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 705 B/s rd, 705 B/s wr, 1 op/s
Nov 29 06:20:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 29 06:20:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 29 06:20:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 06:20:52 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 06:20:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 06:20:52 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 06:20:52 compute-0 podman[93408]: 2025-11-29 06:20:52.144450804 +0000 UTC m=+0.093100752 container create 69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackwell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:20:52 compute-0 podman[93408]: 2025-11-29 06:20:52.077649091 +0000 UTC m=+0.026299059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:20:52 compute-0 systemd[1]: Started libpod-conmon-69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90.scope.
Nov 29 06:20:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:20:52 compute-0 podman[93408]: 2025-11-29 06:20:52.390577689 +0000 UTC m=+0.339227627 container init 69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackwell, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:20:52 compute-0 podman[93408]: 2025-11-29 06:20:52.399036993 +0000 UTC m=+0.347686901 container start 69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackwell, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 06:20:52 compute-0 silly_blackwell[93424]: 167 167
Nov 29 06:20:52 compute-0 systemd[1]: libpod-69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90.scope: Deactivated successfully.
Nov 29 06:20:52 compute-0 podman[93408]: 2025-11-29 06:20:52.422195669 +0000 UTC m=+0.370845617 container attach 69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:20:52 compute-0 podman[93408]: 2025-11-29 06:20:52.422813917 +0000 UTC m=+0.371463825 container died 69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:20:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-25189d67f06a1025f7e8d35e6bdc5f68bb356700a8f22a35d856f7e5c0092d66-merged.mount: Deactivated successfully.
Nov 29 06:20:52 compute-0 podman[93408]: 2025-11-29 06:20:52.582273098 +0000 UTC m=+0.530923006 container remove 69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:20:52 compute-0 systemd[1]: libpod-conmon-69dfe31d98b6ca5d82f1d1be9292adf10c6995b65be0c2080ee98d918d375f90.scope: Deactivated successfully.
Nov 29 06:20:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:20:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 06:20:52 compute-0 ceph-mon[74654]: Deploying daemon rgw.rgw.compute-0.vmptkp on compute-0
Nov 29 06:20:52 compute-0 ceph-mon[74654]: 4.12 scrub starts
Nov 29 06:20:52 compute-0 ceph-mon[74654]: 4.12 scrub ok
Nov 29 06:20:52 compute-0 ceph-mon[74654]: pgmap v162: 179 pgs: 1 creating+peering, 178 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 705 B/s rd, 705 B/s wr, 1 op/s
Nov 29 06:20:52 compute-0 ceph-mon[74654]: osdmap e49: 3 total, 3 up, 3 in
Nov 29 06:20:52 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1290272359' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 06:20:52 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 06:20:52 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1253186838' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 06:20:52 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 06:20:52 compute-0 ceph-mon[74654]: 7.17 deep-scrub starts
Nov 29 06:20:52 compute-0 ceph-mon[74654]: 7.17 deep-scrub ok
Nov 29 06:20:52 compute-0 systemd[1]: Reloading.
Nov 29 06:20:52 compute-0 systemd-rc-local-generator[93471]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:20:52 compute-0 systemd-sysv-generator[93474]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:20:53 compute-0 systemd[1]: Reloading.
Nov 29 06:20:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 29 06:20:53 compute-0 systemd-sysv-generator[93515]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:20:53 compute-0 systemd-rc-local-generator[93512]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:20:53 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 06:20:53 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 06:20:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 29 06:20:53 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 29 06:20:53 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.vmptkp for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 06:20:53 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 29 06:20:53 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 29 06:20:53 compute-0 podman[93569]: 2025-11-29 06:20:53.58298532 +0000 UTC m=+0.082491136 container create 74d56036cbc89ee6065295b06ea4b6794f8c605a9bb9107773989fec28b7c37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-rgw-rgw-compute-0-vmptkp, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:20:53 compute-0 podman[93569]: 2025-11-29 06:20:53.524748513 +0000 UTC m=+0.024254349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:20:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf99ae86100057763cdf896c416afb84015183091bd9e0a7fce55dd18cb20a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf99ae86100057763cdf896c416afb84015183091bd9e0a7fce55dd18cb20a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf99ae86100057763cdf896c416afb84015183091bd9e0a7fce55dd18cb20a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf99ae86100057763cdf896c416afb84015183091bd9e0a7fce55dd18cb20a1/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.vmptkp supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:53 compute-0 podman[93569]: 2025-11-29 06:20:53.771307102 +0000 UTC m=+0.270812928 container init 74d56036cbc89ee6065295b06ea4b6794f8c605a9bb9107773989fec28b7c37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-rgw-rgw-compute-0-vmptkp, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:20:53 compute-0 podman[93569]: 2025-11-29 06:20:53.778834378 +0000 UTC m=+0.278340184 container start 74d56036cbc89ee6065295b06ea4b6794f8c605a9bb9107773989fec28b7c37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-rgw-rgw-compute-0-vmptkp, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 06:20:53 compute-0 bash[93569]: 74d56036cbc89ee6065295b06ea4b6794f8c605a9bb9107773989fec28b7c37d
Nov 29 06:20:53 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.vmptkp for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 06:20:53 compute-0 sudo[93343]: pam_unix(sudo:session): session closed for user root
Nov 29 06:20:53 compute-0 radosgw[93592]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 29 06:20:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:20:53 compute-0 radosgw[93592]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 29 06:20:53 compute-0 radosgw[93592]: framework: beast
Nov 29 06:20:53 compute-0 radosgw[93592]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 29 06:20:53 compute-0 radosgw[93592]: init_numa not setting numa affinity
Nov 29 06:20:53 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v165: 180 pgs: 1 unknown, 1 creating+peering, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 841 B/s rd, 841 B/s wr, 1 op/s
Nov 29 06:20:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:20:54
Nov 29 06:20:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:20:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Some PGs (0.005556) are unknown; try again later
Nov 29 06:20:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 29 06:20:54 compute-0 ceph-mon[74654]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 06:20:54 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 06:20:54 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 06:20:54 compute-0 ceph-mon[74654]: osdmap e50: 3 total, 3 up, 3 in
Nov 29 06:20:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:20:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:20:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:20:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:20:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:20:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:20:54 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 29 06:20:54 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 29 06:20:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 29 06:20:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:54 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 29 06:20:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:20:54 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:20:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 06:20:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 06:20:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 06:20:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 06:20:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 29 06:20:55 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v167: 181 pgs: 1 creating+peering, 1 unknown, 179 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 402 B/s wr, 4 op/s
Nov 29 06:20:56 compute-0 ceph-mon[74654]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 06:20:56 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 06:20:56 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 29 06:20:56 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 29 06:20:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 06:20:56 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 06:20:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 06:20:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 06:20:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 29 06:20:57 compute-0 ceph-mon[74654]: 4.16 scrub starts
Nov 29 06:20:57 compute-0 ceph-mon[74654]: 4.16 scrub ok
Nov 29 06:20:57 compute-0 ceph-mon[74654]: pgmap v165: 180 pgs: 1 unknown, 1 creating+peering, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 841 B/s rd, 841 B/s wr, 1 op/s
Nov 29 06:20:57 compute-0 ceph-mon[74654]: 4.17 scrub starts
Nov 29 06:20:57 compute-0 ceph-mon[74654]: 4.17 scrub ok
Nov 29 06:20:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:57 compute-0 ceph-mon[74654]: osdmap e51: 3 total, 3 up, 3 in
Nov 29 06:20:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/111233770' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 06:20:57 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 06:20:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 06:20:57 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 29 06:20:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 06:20:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 06:20:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 06:20:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 06:20:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:57 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev f17e2d30-47bb-4995-954f-855268d5acf9 (Updating rgw.rgw deployment (+3 -> 3))
Nov 29 06:20:57 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event f17e2d30-47bb-4995-954f-855268d5acf9 (Updating rgw.rgw deployment (+3 -> 3)) in 18 seconds
Nov 29 06:20:57 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 06:20:57 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 06:20:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 06:20:57 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:20:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 06:20:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:57 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev f69c7611-808e-4a28-94ca-4532cf709bfe (Updating mds.cephfs deployment (+3 -> 3))
Nov 29 06:20:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gxdwyy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 06:20:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gxdwyy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 06:20:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gxdwyy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 06:20:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:20:57 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:20:57 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.gxdwyy on compute-2
Nov 29 06:20:57 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.gxdwyy on compute-2
Nov 29 06:20:57 compute-0 ceph-mgr[74948]: [progress INFO root] Writing back 12 completed events
Nov 29 06:20:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 06:20:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:20:57 compute-0 sshd-session[93665]: Received disconnect from 103.147.159.91 port 52594:11: Bye Bye [preauth]
Nov 29 06:20:57 compute-0 sshd-session[93665]: Disconnected from authenticating user root 103.147.159.91 port 52594 [preauth]
Nov 29 06:20:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v169: 181 pgs: 1 creating+peering, 180 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 346 B/s wr, 4 op/s
Nov 29 06:20:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 29 06:20:58 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 06:20:58 compute-0 ceph-mon[74654]: 5.e deep-scrub starts
Nov 29 06:20:58 compute-0 ceph-mon[74654]: pgmap v167: 181 pgs: 1 creating+peering, 1 unknown, 179 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 402 B/s wr, 4 op/s
Nov 29 06:20:58 compute-0 ceph-mon[74654]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 06:20:58 compute-0 ceph-mon[74654]: 7.19 scrub starts
Nov 29 06:20:58 compute-0 ceph-mon[74654]: 7.19 scrub ok
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2594248517' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:58 compute-0 ceph-mon[74654]: 4.1e scrub starts
Nov 29 06:20:58 compute-0 ceph-mon[74654]: 4.1e scrub ok
Nov 29 06:20:58 compute-0 ceph-mon[74654]: 5.e deep-scrub ok
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 06:20:58 compute-0 ceph-mon[74654]: osdmap e52: 3 total, 3 up, 3 in
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/111233770' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:58 compute-0 ceph-mon[74654]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gxdwyy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gxdwyy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:20:58 compute-0 ceph-mon[74654]: Deploying daemon mds.cephfs.compute-2.gxdwyy on compute-2
Nov 29 06:20:58 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:58 compute-0 sudo[93690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvkdvtjvfaojwvdhhxgyoqpplqegvzky ; /usr/bin/python3'
Nov 29 06:20:58 compute-0 sudo[93690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:20:58 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 06:20:58 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 06:20:58 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 06:20:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 29 06:20:58 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 29 06:20:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 06:20:58 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 06:20:58 compute-0 python3[93692]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:20:58 compute-0 podman[93693]: 2025-11-29 06:20:58.60814074 +0000 UTC m=+0.068187165 container create fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514 (image=quay.io/ceph/ceph:v18, name=friendly_panini, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 06:20:58 compute-0 systemd[1]: Started libpod-conmon-fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514.scope.
Nov 29 06:20:58 compute-0 podman[93693]: 2025-11-29 06:20:58.566456159 +0000 UTC m=+0.026502584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:20:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92963444a5bb3debf35d5d96d1894f680d7f37ae86774328fc581b9236be2d31/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92963444a5bb3debf35d5d96d1894f680d7f37ae86774328fc581b9236be2d31/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:20:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:20:59 compute-0 podman[93693]: 2025-11-29 06:20:59.428704154 +0000 UTC m=+0.888750659 container init fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514 (image=quay.io/ceph/ceph:v18, name=friendly_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 06:20:59 compute-0 podman[93693]: 2025-11-29 06:20:59.440517564 +0000 UTC m=+0.900564009 container start fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514 (image=quay.io/ceph/ceph:v18, name=friendly_panini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 06:20:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 29 06:20:59 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:20:59 compute-0 podman[93693]: 2025-11-29 06:20:59.84671987 +0000 UTC m=+1.306766325 container attach fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514 (image=quay.io/ceph/ceph:v18, name=friendly_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:20:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:21:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v171: 181 pgs: 1 creating+peering, 180 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.8 KiB/s rd, 341 B/s wr, 3 op/s
Nov 29 06:21:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e3 new map
Nov 29 06:21:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T06:19:35.588785+0000
                                           modified        2025-11-29T06:19:35.589013+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.gxdwyy{-1:24145} state up:standby seq 1 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 06:21:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 29 06:21:00 compute-0 ceph-mon[74654]: pgmap v169: 181 pgs: 1 creating+peering, 180 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 346 B/s wr, 4 op/s
Nov 29 06:21:00 compute-0 ceph-mon[74654]: 7.1a deep-scrub starts
Nov 29 06:21:00 compute-0 ceph-mon[74654]: 7.1a deep-scrub ok
Nov 29 06:21:00 compute-0 ceph-mon[74654]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 06:21:00 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 06:21:00 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-1.cbugbv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 06:21:00 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/49466279' entity='client.rgw.rgw.compute-0.vmptkp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 06:21:00 compute-0 ceph-mon[74654]: osdmap e53: 3 total, 3 up, 3 in
Nov 29 06:21:00 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2594248517' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 06:21:00 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] up:boot
Nov 29 06:21:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] as mds.0
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.gxdwyy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 06:21:00 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.gxdwyy v2:192.168.122.102:6804/1811763433; not ready for session (expect reconnect)
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 29 06:21:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.gxdwyy"} v 0) v1
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.gxdwyy"}]: dispatch
Nov 29 06:21:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e3 all = 0
Nov 29 06:21:00 compute-0 friendly_panini[93709]: could not fetch user info: no user info saved
Nov 29 06:21:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e4 new map
Nov 29 06:21:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T06:19:35.588785+0000
                                           modified        2025-11-29T06:21:00.645745+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24145}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.gxdwyy{0:24145} state up:creating seq 1 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:creating}
Nov 29 06:21:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jzycnf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jzycnf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.gxdwyy is now active in filesystem cephfs as rank 0
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jzycnf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 06:21:00 compute-0 systemd[1]: libpod-fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514.scope: Deactivated successfully.
Nov 29 06:21:00 compute-0 podman[93693]: 2025-11-29 06:21:00.747117353 +0000 UTC m=+2.207163778 container died fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514 (image=quay.io/ceph/ceph:v18, name=friendly_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 06:21:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:21:00 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:21:00 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.jzycnf on compute-0
Nov 29 06:21:00 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.jzycnf on compute-0
Nov 29 06:21:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-92963444a5bb3debf35d5d96d1894f680d7f37ae86774328fc581b9236be2d31-merged.mount: Deactivated successfully.
Nov 29 06:21:00 compute-0 podman[93693]: 2025-11-29 06:21:00.806093441 +0000 UTC m=+2.266139866 container remove fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514 (image=quay.io/ceph/ceph:v18, name=friendly_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 06:21:00 compute-0 systemd[1]: libpod-conmon-fc3580c8deb7cfcdd4aae3a428e0a3f8b5fa5c03e3e6b50d583858c644709514.scope: Deactivated successfully.
Nov 29 06:21:00 compute-0 sudo[93795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:21:00 compute-0 sudo[93795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:21:00 compute-0 sudo[93690]: pam_unix(sudo:session): session closed for user root
Nov 29 06:21:00 compute-0 sudo[93795]: pam_unix(sudo:session): session closed for user root
Nov 29 06:21:00 compute-0 radosgw[93592]: LDAP not started since no server URIs were provided in the configuration.
Nov 29 06:21:00 compute-0 radosgw[93592]: framework: beast
Nov 29 06:21:00 compute-0 radosgw[93592]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 29 06:21:00 compute-0 radosgw[93592]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 29 06:21:00 compute-0 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 06:21:00 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-rgw-rgw-compute-0-vmptkp[93585]: 2025-11-29T06:21:00.840+0000 7f7db64b5940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 29 06:21:00 compute-0 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 06:21:00 compute-0 radosgw[93592]: starting handler: beast
Nov 29 06:21:00 compute-0 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 06:21:00 compute-0 radosgw[93592]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 06:21:00 compute-0 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 06:21:00 compute-0 sudo[93857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:21:00 compute-0 sudo[93857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:21:00 compute-0 sudo[93857]: pam_unix(sudo:session): session closed for user root
Nov 29 06:21:00 compute-0 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 06:21:00 compute-0 radosgw[93592]: mgrc service_daemon_register rgw.14361 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.vmptkp,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=916ce3c8-b215-47fd-909b-03c5b552b52f,zone_name=default,zonegroup_id=a7fe8251-a74c-4f06-a680-d530d14bb192,zonegroup_name=default}
Nov 29 06:21:00 compute-0 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 06:21:00 compute-0 sudo[94235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:21:00 compute-0 sudo[94235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:21:00 compute-0 sudo[94235]: pam_unix(sudo:session): session closed for user root
Nov 29 06:21:00 compute-0 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 06:21:00 compute-0 sudo[94447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcqbuglsmbrpczaonayhtqcbwkbbumoz ; /usr/bin/python3'
Nov 29 06:21:00 compute-0 sudo[94447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:21:01 compute-0 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 06:21:01 compute-0 sudo[94446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:21:01 compute-0 sudo[94446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:21:01 compute-0 python3[94460]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:21:01 compute-0 podman[94474]: 2025-11-29 06:21:01.238527151 +0000 UTC m=+0.072278892 container create 7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d (image=quay.io/ceph/ceph:v18, name=jovial_lalande, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:21:01 compute-0 podman[94474]: 2025-11-29 06:21:01.204372448 +0000 UTC m=+0.038124099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 06:21:01 compute-0 systemd[1]: Started libpod-conmon-7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d.scope.
Nov 29 06:21:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b393204dd4b0132d92cc4649fd8946f9d7ddfc7562cf7f0a7e0ab6fc7b58bcc9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b393204dd4b0132d92cc4649fd8946f9d7ddfc7562cf7f0a7e0ab6fc7b58bcc9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:21:01 compute-0 podman[94474]: 2025-11-29 06:21:01.525423311 +0000 UTC m=+0.359174992 container init 7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d (image=quay.io/ceph/ceph:v18, name=jovial_lalande, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 06:21:01 compute-0 podman[94474]: 2025-11-29 06:21:01.56045812 +0000 UTC m=+0.394209761 container start 7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d (image=quay.io/ceph/ceph:v18, name=jovial_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 06:21:01 compute-0 podman[94474]: 2025-11-29 06:21:01.948539093 +0000 UTC m=+0.782290814 container attach 7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d (image=quay.io/ceph/ceph:v18, name=jovial_lalande, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:21:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v173: 181 pgs: 181 active+clean; 452 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 3.8 KiB/s wr, 13 op/s
Nov 29 06:21:02 compute-0 podman[94558]: 2025-11-29 06:21:02.011163596 +0000 UTC m=+0.032888098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:21:02 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event f0af229f-db58-4777-9300-7823e92993ef (Global Recovery Event) in 58 seconds
Nov 29 06:21:02 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:02 compute-0 ceph-mon[74654]: pgmap v171: 181 pgs: 1 creating+peering, 180 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.8 KiB/s rd, 341 B/s wr, 3 op/s
Nov 29 06:21:02 compute-0 ceph-mon[74654]: from='client.? ' entity='client.rgw.rgw.compute-2.pkypgd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 06:21:02 compute-0 ceph-mon[74654]: osdmap e54: 3 total, 3 up, 3 in
Nov 29 06:21:02 compute-0 ceph-mon[74654]: mds.? [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] up:boot
Nov 29 06:21:02 compute-0 ceph-mon[74654]: daemon mds.cephfs.compute-2.gxdwyy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 06:21:02 compute-0 ceph-mon[74654]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 06:21:02 compute-0 ceph-mon[74654]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 06:21:02 compute-0 ceph-mon[74654]: Cluster is now healthy
Nov 29 06:21:02 compute-0 ceph-mon[74654]: fsmap cephfs:0 1 up:standby
Nov 29 06:21:02 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.gxdwyy"}]: dispatch
Nov 29 06:21:02 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:02 compute-0 ceph-mon[74654]: fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:creating}
Nov 29 06:21:02 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:02 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jzycnf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 06:21:02 compute-0 ceph-mon[74654]: daemon mds.cephfs.compute-2.gxdwyy is now active in filesystem cephfs as rank 0
Nov 29 06:21:02 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jzycnf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 06:21:02 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:21:02 compute-0 ceph-mon[74654]: Deploying daemon mds.cephfs.compute-0.jzycnf on compute-0
Nov 29 06:21:02 compute-0 podman[94558]: 2025-11-29 06:21:02.384153935 +0000 UTC m=+0.405878437 container create 1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcclintock, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 06:21:02 compute-0 systemd[1]: Started libpod-conmon-1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664.scope.
Nov 29 06:21:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:21:02 compute-0 podman[94558]: 2025-11-29 06:21:02.661769638 +0000 UTC m=+0.683494110 container init 1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcclintock, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:21:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:21:02 compute-0 podman[94558]: 2025-11-29 06:21:02.67157971 +0000 UTC m=+0.693304172 container start 1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcclintock, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:21:02 compute-0 nervous_mcclintock[94615]: 167 167
Nov 29 06:21:02 compute-0 podman[94558]: 2025-11-29 06:21:02.676981216 +0000 UTC m=+0.698705678 container attach 1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcclintock, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:21:02 compute-0 systemd[1]: libpod-1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664.scope: Deactivated successfully.
Nov 29 06:21:02 compute-0 podman[94558]: 2025-11-29 06:21:02.678120909 +0000 UTC m=+0.699845411 container died 1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:21:02 compute-0 jovial_lalande[94498]: {
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "user_id": "openstack",
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "display_name": "openstack",
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "email": "",
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "suspended": 0,
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "max_buckets": 1000,
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "subusers": [],
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "keys": [
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:         {
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:             "user": "openstack",
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:             "access_key": "R6E8YK4W4T3CTN23FBKD",
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:             "secret_key": "y5AKHfabfxYBgWBxC6rwwMHQuHvZBwkmJTopzDw5"
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:         }
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     ],
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "swift_keys": [],
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "caps": [],
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "op_mask": "read, write, delete",
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "default_placement": "",
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "default_storage_class": "",
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "placement_tags": [],
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "bucket_quota": {
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:         "enabled": false,
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:         "check_on_raw": false,
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:         "max_size": -1,
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:         "max_size_kb": 0,
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:         "max_objects": -1
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     },
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "user_quota": {
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:         "enabled": false,
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:         "check_on_raw": false,
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:         "max_size": -1,
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:         "max_size_kb": 0,
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:         "max_objects": -1
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     },
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "temp_url_keys": [],
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "type": "rgw",
Nov 29 06:21:02 compute-0 jovial_lalande[94498]:     "mfa_ids": []
Nov 29 06:21:02 compute-0 jovial_lalande[94498]: }
Nov 29 06:21:02 compute-0 jovial_lalande[94498]: 
Nov 29 06:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c47be74bf5cdd88d44d52b523ef1ba9f2782e219fdbe17e185710c63fdbfadf5-merged.mount: Deactivated successfully.
Nov 29 06:21:02 compute-0 podman[94558]: 2025-11-29 06:21:02.723106934 +0000 UTC m=+0.744831426 container remove 1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcclintock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:21:02 compute-0 systemd[1]: libpod-conmon-1db1f2b60f0c93be3607acca398c23bf947cb7c1ea4b97288f46967706609664.scope: Deactivated successfully.
Nov 29 06:21:02 compute-0 systemd[1]: libpod-7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d.scope: Deactivated successfully.
Nov 29 06:21:02 compute-0 podman[94474]: 2025-11-29 06:21:02.767893794 +0000 UTC m=+1.601645435 container died 7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d (image=quay.io/ceph/ceph:v18, name=jovial_lalande, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 06:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b393204dd4b0132d92cc4649fd8946f9d7ddfc7562cf7f0a7e0ab6fc7b58bcc9-merged.mount: Deactivated successfully.
Nov 29 06:21:02 compute-0 systemd[1]: Reloading.
Nov 29 06:21:02 compute-0 systemd-rc-local-generator[94687]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:21:02 compute-0 systemd-sysv-generator[94698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:21:02 compute-0 podman[94474]: 2025-11-29 06:21:02.969456266 +0000 UTC m=+1.803207897 container remove 7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d (image=quay.io/ceph/ceph:v18, name=jovial_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 06:21:02 compute-0 sudo[94447]: pam_unix(sudo:session): session closed for user root
Nov 29 06:21:03 compute-0 systemd[1]: libpod-conmon-7ef84b47b96b8d1211df6e8194eecae06804c40df4ce45ec949830b04486961d.scope: Deactivated successfully.
Nov 29 06:21:03 compute-0 systemd[1]: Reloading.
Nov 29 06:21:03 compute-0 systemd-rc-local-generator[94731]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:21:03 compute-0 systemd-sysv-generator[94734]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:21:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e5 new map
Nov 29 06:21:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T06:19:35.588785+0000
                                           modified        2025-11-29T06:21:01.949294+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24145}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.gxdwyy{0:24145} state up:active seq 2 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 29 06:21:03 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] up:active
Nov 29 06:21:03 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active}
Nov 29 06:21:03 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 29 06:21:03 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 29 06:21:03 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.jzycnf for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 06:21:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v174: 181 pgs: 181 active+clean; 452 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 1.0 KiB/s rd, 3.3 KiB/s wr, 11 op/s
Nov 29 06:21:04 compute-0 podman[94791]: 2025-11-29 06:21:03.912716504 +0000 UTC m=+0.024891758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:21:04 compute-0 ceph-mon[74654]: pgmap v173: 181 pgs: 181 active+clean; 452 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 3.8 KiB/s wr, 13 op/s
Nov 29 06:21:04 compute-0 ceph-mon[74654]: 7.1c scrub starts
Nov 29 06:21:04 compute-0 ceph-mon[74654]: 7.1c scrub ok
Nov 29 06:21:04 compute-0 ceph-mon[74654]: 4.18 scrub starts
Nov 29 06:21:04 compute-0 ceph-mon[74654]: 4.18 scrub ok
Nov 29 06:21:04 compute-0 podman[94791]: 2025-11-29 06:21:04.230774511 +0000 UTC m=+0.342949685 container create 4848c8d8bb5fa4a7cc59121390b320b141644cb5003af1bd82d97c12a873a76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mds-cephfs-compute-0-jzycnf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 06:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f32100da6d4ec69e9aaf0d3ee2060f68728b7b6dc4bad2bf83446d612ba8b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f32100da6d4ec69e9aaf0d3ee2060f68728b7b6dc4bad2bf83446d612ba8b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f32100da6d4ec69e9aaf0d3ee2060f68728b7b6dc4bad2bf83446d612ba8b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:21:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52f32100da6d4ec69e9aaf0d3ee2060f68728b7b6dc4bad2bf83446d612ba8b9/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.jzycnf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:21:04 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 29 06:21:04 compute-0 podman[94791]: 2025-11-29 06:21:04.664809817 +0000 UTC m=+0.776985091 container init 4848c8d8bb5fa4a7cc59121390b320b141644cb5003af1bd82d97c12a873a76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mds-cephfs-compute-0-jzycnf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 06:21:04 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 29 06:21:04 compute-0 podman[94791]: 2025-11-29 06:21:04.675843045 +0000 UTC m=+0.788018259 container start 4848c8d8bb5fa4a7cc59121390b320b141644cb5003af1bd82d97c12a873a76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mds-cephfs-compute-0-jzycnf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 06:21:04 compute-0 ceph-mds[94810]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 06:21:04 compute-0 ceph-mds[94810]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 29 06:21:04 compute-0 ceph-mds[94810]: main not setting numa affinity
Nov 29 06:21:04 compute-0 ceph-mds[94810]: pidfile_write: ignore empty --pid-file
Nov 29 06:21:04 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mds-cephfs-compute-0-jzycnf[94806]: starting mds.cephfs.compute-0.jzycnf at 
Nov 29 06:21:04 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Updating MDS map to version 5 from mon.0
Nov 29 06:21:05 compute-0 bash[94791]: 4848c8d8bb5fa4a7cc59121390b320b141644cb5003af1bd82d97c12a873a76b
Nov 29 06:21:05 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.jzycnf for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 06:21:05 compute-0 sudo[94446]: pam_unix(sudo:session): session closed for user root
Nov 29 06:21:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:21:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e6 new map
Nov 29 06:21:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T06:19:35.588785+0000
                                           modified        2025-11-29T06:21:01.949294+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24145}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.gxdwyy{0:24145} state up:active seq 2 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.jzycnf{-1:14409} state up:standby seq 1 addr [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 06:21:05 compute-0 ceph-mon[74654]: mds.? [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] up:active
Nov 29 06:21:05 compute-0 ceph-mon[74654]: fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active}
Nov 29 06:21:05 compute-0 ceph-mon[74654]: 6.4 scrub starts
Nov 29 06:21:05 compute-0 ceph-mon[74654]: 6.4 scrub ok
Nov 29 06:21:05 compute-0 ceph-mon[74654]: pgmap v174: 181 pgs: 181 active+clean; 452 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 1.0 KiB/s rd, 3.3 KiB/s wr, 11 op/s
Nov 29 06:21:05 compute-0 ceph-mon[74654]: 4.13 scrub starts
Nov 29 06:21:05 compute-0 ceph-mon[74654]: 4.13 scrub ok
Nov 29 06:21:05 compute-0 ceph-mon[74654]: 6.6 scrub starts
Nov 29 06:21:05 compute-0 ceph-mon[74654]: 6.6 scrub ok
Nov 29 06:21:05 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Updating MDS map to version 6 from mon.0
Nov 29 06:21:05 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Monitors have assigned me to become a standby.
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v175: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 122 KiB/s rd, 5.6 KiB/s wr, 219 op/s
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 1)
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 1)
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 1)
Nov 29 06:21:06 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] up:boot
Nov 29 06:21:06 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 1 up:standby
Nov 29 06:21:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.jzycnf"} v 0) v1
Nov 29 06:21:06 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.jzycnf"}]: dispatch
Nov 29 06:21:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e6 all = 0
Nov 29 06:21:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 06:21:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:21:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e7 new map
Nov 29 06:21:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T06:19:35.588785+0000
                                           modified        2025-11-29T06:21:01.949294+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24145}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.gxdwyy{0:24145} state up:active seq 2 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.jzycnf{-1:14409} state up:standby seq 1 addr [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 06:21:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:06 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 1 up:standby
Nov 29 06:21:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:21:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 06:21:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vlqnad", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 06:21:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vlqnad", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 06:21:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vlqnad", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 06:21:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:21:06 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.vlqnad on compute-1
Nov 29 06:21:06 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.vlqnad on compute-1
Nov 29 06:21:06 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Nov 29 06:21:06 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Nov 29 06:21:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 29 06:21:07 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:21:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 29 06:21:07 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 29 06:21:07 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev 4e68207f-6124-4e17-a6a7-080c35b0b4fc (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 06:21:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 06:21:07 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:21:07 compute-0 ceph-mon[74654]: 5.12 deep-scrub starts
Nov 29 06:21:07 compute-0 ceph-mon[74654]: 5.12 deep-scrub ok
Nov 29 06:21:07 compute-0 ceph-mon[74654]: pgmap v175: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 122 KiB/s rd, 5.6 KiB/s wr, 219 op/s
Nov 29 06:21:07 compute-0 ceph-mon[74654]: 4.c scrub starts
Nov 29 06:21:07 compute-0 ceph-mon[74654]: 4.c scrub ok
Nov 29 06:21:07 compute-0 ceph-mon[74654]: mds.? [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] up:boot
Nov 29 06:21:07 compute-0 ceph-mon[74654]: fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 1 up:standby
Nov 29 06:21:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.jzycnf"}]: dispatch
Nov 29 06:21:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:21:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:07 compute-0 ceph-mon[74654]: fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 1 up:standby
Nov 29 06:21:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vlqnad", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 06:21:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vlqnad", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 06:21:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:21:07 compute-0 ceph-mon[74654]: Deploying daemon mds.cephfs.compute-1.vlqnad on compute-1
Nov 29 06:21:07 compute-0 ceph-mon[74654]: 6.9 deep-scrub starts
Nov 29 06:21:07 compute-0 ceph-mon[74654]: 6.9 deep-scrub ok
Nov 29 06:21:07 compute-0 ceph-mgr[74948]: [progress INFO root] Writing back 13 completed events
Nov 29 06:21:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 06:21:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:21:07 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v177: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 122 KiB/s rd, 5.6 KiB/s wr, 219 op/s
Nov 29 06:21:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:21:08 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 29 06:21:08 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:21:08 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:21:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 29 06:21:08 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 29 06:21:08 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev 8a5a0651-b0e6-4304-8cea-03dbf2437fb2 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 06:21:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 06:21:08 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:21:08 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:21:08 compute-0 ceph-mon[74654]: osdmap e55: 3 total, 3 up, 3 in
Nov 29 06:21:08 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:21:08 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:08 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:08 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:21:08 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:21:08 compute-0 ceph-mon[74654]: osdmap e56: 3 total, 3 up, 3 in
Nov 29 06:21:08 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:21:08 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 56 pg[8.0( v 46'4 (0'0,46'4] local-lis/les=45/46 n=4 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=11.837022781s) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 46'3 mlcod 46'3 active pruub 164.471450806s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:08 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 56 pg[8.0( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=11.837022781s) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 46'3 mlcod 0'0 unknown pruub 164.471450806s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:21:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 29 06:21:09 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 29 06:21:09 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 29 06:21:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v179: 212 pgs: 31 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 121 KiB/s rd, 2.7 KiB/s wr, 209 op/s
Nov 29 06:21:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:21:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:21:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 29 06:21:11 compute-0 ceph-mon[74654]: pgmap v177: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 122 KiB/s rd, 5.6 KiB/s wr, 219 op/s
Nov 29 06:21:11 compute-0 ceph-mon[74654]: 6.e scrub starts
Nov 29 06:21:11 compute-0 ceph-mon[74654]: 6.e scrub ok
Nov 29 06:21:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:11 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 29 06:21:11 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev e2c2442c-2f75-44ac-aff9-5dadde01ae6c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 06:21:11 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 29 06:21:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:21:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 06:21:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:21:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v181: 212 pgs: 1 peering, 31 unknown, 180 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 198 op/s
Nov 29 06:21:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:21:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:21:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.16( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.2( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.f( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.18( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.9( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.a( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.13( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.11( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.3( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1f( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.19( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1a( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.8( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.15( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.b( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.c( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.d( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.e( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.14( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.6( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.7( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.5( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.4( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1e( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1b( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1d( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1c( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.10( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.12( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.17( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 29 06:21:12 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 29 06:21:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e8 new map
Nov 29 06:21:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T06:19:35.588785+0000
                                           modified        2025-11-29T06:21:01.949294+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24145}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.gxdwyy{0:24145} state up:active seq 2 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.jzycnf{-1:14409} state up:standby seq 1 addr [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.vlqnad{-1:24131} state up:standby seq 1 addr [v2:192.168.122.101:6804/3552238207,v1:192.168.122.101:6805/3552238207] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 06:21:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:21:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:21:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:21:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:21:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.18( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.9( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.2( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.a( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.13( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.3( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.19( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.8( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.15( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1a( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.d( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.11( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.e( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.14( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.6( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.0( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 46'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.7( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.5( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1e( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.16( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1d( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.1c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.4( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.12( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.10( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 57 pg[8.17( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:12 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 29 06:21:12 compute-0 ceph-mon[74654]: 2.13 scrub starts
Nov 29 06:21:12 compute-0 ceph-mon[74654]: 2.13 scrub ok
Nov 29 06:21:12 compute-0 ceph-mon[74654]: 6.b scrub starts
Nov 29 06:21:12 compute-0 ceph-mon[74654]: 6.b scrub ok
Nov 29 06:21:12 compute-0 ceph-mon[74654]: pgmap v179: 212 pgs: 31 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 121 KiB/s rd, 2.7 KiB/s wr, 209 op/s
Nov 29 06:21:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:21:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:12 compute-0 ceph-mon[74654]: osdmap e57: 3 total, 3 up, 3 in
Nov 29 06:21:12 compute-0 ceph-mon[74654]: 6.c scrub starts
Nov 29 06:21:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 06:21:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:12 compute-0 ceph-mgr[74948]: mgr.server handle_open ignoring open from mds.cephfs.compute-1.vlqnad v2:192.168.122.101:6804/3552238207; not ready for session (expect reconnect)
Nov 29 06:21:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:12 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3552238207,v1:192.168.122.101:6805/3552238207] up:boot
Nov 29 06:21:12 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 06:21:12 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 29 06:21:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:21:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.vlqnad"} v 0) v1
Nov 29 06:21:12 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.vlqnad"}]: dispatch
Nov 29 06:21:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e8 all = 0
Nov 29 06:21:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 06:21:12 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev 8a7fbd39-793f-459a-93ff-e5f5e3bb9609 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 06:21:12 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev 4e68207f-6124-4e17-a6a7-080c35b0b4fc (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 06:21:12 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event 4e68207f-6124-4e17-a6a7-080c35b0b4fc (PG autoscaler increasing pool 8 PGs from 1 to 32) in 6 seconds
Nov 29 06:21:12 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev 8a5a0651-b0e6-4304-8cea-03dbf2437fb2 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 06:21:12 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event 8a5a0651-b0e6-4304-8cea-03dbf2437fb2 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 5 seconds
Nov 29 06:21:12 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev e2c2442c-2f75-44ac-aff9-5dadde01ae6c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 06:21:12 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event e2c2442c-2f75-44ac-aff9-5dadde01ae6c (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 29 06:21:12 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev 8a7fbd39-793f-459a-93ff-e5f5e3bb9609 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 06:21:12 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event 8a7fbd39-793f-459a-93ff-e5f5e3bb9609 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 29 06:21:12 compute-0 ceph-mgr[74948]: [progress INFO root] Writing back 17 completed events
Nov 29 06:21:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 58 pg[9.0( v 56'1130 (0'0,56'1130] local-lis/les=47/48 n=177 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=58 pruub=10.116784096s) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 56'1129 mlcod 56'1129 active pruub 167.403060913s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 58 pg[9.0( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=58 pruub=10.116784096s) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 56'1129 mlcod 0'0 unknown pruub 167.403060913s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:12 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev f69c7611-808e-4a28-94ca-4532cf709bfe (Updating mds.cephfs deployment (+3 -> 3))
Nov 29 06:21:12 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event f69c7611-808e-4a28-94ca-4532cf709bfe (Updating mds.cephfs deployment (+3 -> 3)) in 16 seconds
Nov 29 06:21:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 29 06:21:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 29 06:21:13 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v183: 274 pgs: 1 peering, 93 unknown, 180 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 198 op/s
Nov 29 06:21:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:21:14 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 29 06:21:14 compute-0 ceph-mon[74654]: pgmap v181: 212 pgs: 1 peering, 31 unknown, 180 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 198 op/s
Nov 29 06:21:14 compute-0 ceph-mon[74654]: 6.c scrub ok
Nov 29 06:21:14 compute-0 ceph-mon[74654]: 6.f scrub starts
Nov 29 06:21:14 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:21:14 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 06:21:14 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:21:14 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:21:14 compute-0 ceph-mon[74654]: 6.f scrub ok
Nov 29 06:21:14 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:14 compute-0 ceph-mon[74654]: mds.? [v2:192.168.122.101:6804/3552238207,v1:192.168.122.101:6805/3552238207] up:boot
Nov 29 06:21:14 compute-0 ceph-mon[74654]: fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 06:21:14 compute-0 ceph-mon[74654]: osdmap e58: 3 total, 3 up, 3 in
Nov 29 06:21:14 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.vlqnad"}]: dispatch
Nov 29 06:21:14 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:14 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:14 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 29 06:21:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.19( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.3( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.e( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.8( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.b( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.17( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.12( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.10( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.2( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1e( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.18( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1b( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.9( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.a( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.14( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.d( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.c( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.f( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.7( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.6( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.15( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.5( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.4( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1a( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1c( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1f( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1d( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.13( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.11( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.16( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=47/48 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.0( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 56'1129 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-mgr[74948]: [progress INFO root] update: starting ev 69c26498-5953-4c32-b667-91684388cce7 (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.2( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.14( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.c( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1c( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.4( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Nov 29 06:21:14 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 59 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=47/47 les/c/f=48/48/0 sis=58) [1] r=0 lpr=58 pi=[47,58)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:14 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:14 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.zzbnoj on compute-0
Nov 29 06:21:14 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.zzbnoj on compute-0
Nov 29 06:21:14 compute-0 sudo[94830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:21:14 compute-0 sudo[94830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:21:14 compute-0 sudo[94830]: pam_unix(sudo:session): session closed for user root
Nov 29 06:21:14 compute-0 sudo[94855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:21:14 compute-0 sudo[94855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:21:14 compute-0 sudo[94855]: pam_unix(sudo:session): session closed for user root
Nov 29 06:21:15 compute-0 sudo[94880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:21:15 compute-0 sudo[94880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:21:15 compute-0 sudo[94880]: pam_unix(sudo:session): session closed for user root
Nov 29 06:21:15 compute-0 sudo[94905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:21:15 compute-0 sudo[94905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:21:15 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 29 06:21:15 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 29 06:21:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 29 06:21:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v185: 274 pgs: 31 unknown, 243 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 196 op/s
Nov 29 06:21:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 06:21:16 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:16 compute-0 ceph-mon[74654]: 3.11 scrub starts
Nov 29 06:21:16 compute-0 ceph-mon[74654]: 3.11 scrub ok
Nov 29 06:21:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:16 compute-0 ceph-mon[74654]: pgmap v183: 274 pgs: 1 peering, 93 unknown, 180 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 198 op/s
Nov 29 06:21:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:16 compute-0 ceph-mon[74654]: osdmap e59: 3 total, 3 up, 3 in
Nov 29 06:21:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:16 compute-0 ceph-mon[74654]: Deploying daemon haproxy.rgw.default.compute-0.zzbnoj on compute-0
Nov 29 06:21:16 compute-0 ceph-mon[74654]: 3.8 scrub starts
Nov 29 06:21:16 compute-0 ceph-mon[74654]: 3.8 scrub ok
Nov 29 06:21:16 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:21:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 29 06:21:16 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 29 06:21:16 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 60 pg[11.0( v 54'2 (0'0,54'2] local-lis/les=51/52 n=2 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=12.091829300s) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 54'1 mlcod 54'1 active pruub 173.492294312s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:16 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 60 pg[11.0( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=60 pruub=12.091829300s) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 54'1 mlcod 0'0 unknown pruub 173.492294312s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:17 compute-0 ceph-mon[74654]: 7.6 scrub starts
Nov 29 06:21:17 compute-0 ceph-mon[74654]: 7.6 scrub ok
Nov 29 06:21:17 compute-0 ceph-mon[74654]: pgmap v185: 274 pgs: 31 unknown, 243 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 196 op/s
Nov 29 06:21:17 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:17 compute-0 ceph-mon[74654]: 3.0 scrub starts
Nov 29 06:21:17 compute-0 ceph-mon[74654]: 3.0 scrub ok
Nov 29 06:21:17 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:21:17 compute-0 ceph-mon[74654]: osdmap e60: 3 total, 3 up, 3 in
Nov 29 06:21:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e9 new map
Nov 29 06:21:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T06:19:35.588785+0000
                                           modified        2025-11-29T06:21:17.214295+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24145}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.gxdwyy{0:24145} state up:active seq 6 join_fscid=1 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.jzycnf{-1:14409} state up:standby seq 4 join_fscid=1 addr [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.vlqnad{-1:24131} state up:standby seq 1 addr [v2:192.168.122.101:6804/3552238207,v1:192.168.122.101:6805/3552238207] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 06:21:17 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Updating MDS map to version 9 from mon.0
Nov 29 06:21:17 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] up:standby
Nov 29 06:21:17 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] up:active
Nov 29 06:21:17 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 06:21:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:21:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 29 06:21:17 compute-0 sshd-session[95005]: Invalid user alma from 138.124.186.225 port 38132
Nov 29 06:21:17 compute-0 sshd-session[95005]: Received disconnect from 138.124.186.225 port 38132:11: Bye Bye [preauth]
Nov 29 06:21:17 compute-0 sshd-session[95005]: Disconnected from invalid user alma 138.124.186.225 port 38132 [preauth]
Nov 29 06:21:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:21:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 29 06:21:18 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.14( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.13( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.11( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1f( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1e( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1d( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.6( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.7( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.3( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.4( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.18( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.17( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.5( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.d( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.e( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.f( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.8( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.16( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.19( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1a( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1c( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.12( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.10( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.b( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.9( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.a( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.c( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1( v 54'2 (0'0,54'2] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.2( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.15( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1b( v 54'2 lc 0'0 (0'0,54'2] local-lis/les=51/52 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:18 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.14( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.11( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.13( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.6( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1f( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.7( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1d( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.4( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.18( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.3( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1e( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.17( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.5( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.d( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.f( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.e( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.8( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.19( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.16( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1a( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1c( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.0( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 54'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.10( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.b( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.9( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1( v 54'2 (0'0,54'2] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.12( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.a( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.2( v 54'2 (0'0,54'2] local-lis/les=60/61 n=1 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.15( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.c( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 61 pg[11.1b( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=51/51 les/c/f=52/52/0 sis=60) [1] r=0 lpr=60 pi=[51,60)/1 crt=54'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:18 compute-0 ceph-mon[74654]: 6.8 scrub starts
Nov 29 06:21:18 compute-0 ceph-mon[74654]: 6.8 scrub ok
Nov 29 06:21:18 compute-0 ceph-mon[74654]: mds.? [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] up:standby
Nov 29 06:21:18 compute-0 ceph-mon[74654]: mds.? [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] up:active
Nov 29 06:21:18 compute-0 ceph-mon[74654]: fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 06:21:18 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 06:21:18 compute-0 ceph-mon[74654]: osdmap e61: 3 total, 3 up, 3 in
Nov 29 06:21:18 compute-0 ceph-mgr[74948]: [progress WARNING root] Starting Global Recovery Event,31 pgs not in active + clean state
Nov 29 06:21:19 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 29 06:21:19 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 29 06:21:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:20 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 29 06:21:20 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 29 06:21:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e10 new map
Nov 29 06:21:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e10 print_map
                                           e10
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T06:19:35.588785+0000
                                           modified        2025-11-29T06:21:17.214295+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24145}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.gxdwyy{0:24145} state up:active seq 6 join_fscid=1 addr [v2:192.168.122.102:6804/1811763433,v1:192.168.122.102:6805/1811763433] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.jzycnf{-1:14409} state up:standby seq 4 join_fscid=1 addr [v2:192.168.122.100:6806/3521074432,v1:192.168.122.100:6807/3521074432] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.vlqnad{-1:24131} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/3552238207,v1:192.168.122.101:6805/3552238207] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 06:21:21 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3552238207,v1:192.168.122.101:6805/3552238207] up:standby
Nov 29 06:21:21 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 06:21:21 compute-0 ceph-mon[74654]: pgmap v187: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:21 compute-0 ceph-mon[74654]: 7.13 scrub starts
Nov 29 06:21:21 compute-0 ceph-mon[74654]: 7.13 scrub ok
Nov 29 06:21:21 compute-0 ceph-mon[74654]: 7.3 scrub starts
Nov 29 06:21:21 compute-0 ceph-mon[74654]: 7.3 scrub ok
Nov 29 06:21:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:21:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:21:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 06:21:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 06:21:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:21:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:22 compute-0 podman[94970]: 2025-11-29 06:21:22.412008935 +0000 UTC m=+6.933384470 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 29 06:21:22 compute-0 podman[94970]: 2025-11-29 06:21:22.508248496 +0000 UTC m=+7.029624051 container create 66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925 (image=quay.io/ceph/haproxy:2.3, name=practical_tharp)
Nov 29 06:21:22 compute-0 systemd[1]: Started libpod-conmon-66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925.scope.
Nov 29 06:21:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:21:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:21:22 compute-0 podman[94970]: 2025-11-29 06:21:22.808276284 +0000 UTC m=+7.329651809 container init 66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925 (image=quay.io/ceph/haproxy:2.3, name=practical_tharp)
Nov 29 06:21:22 compute-0 podman[94970]: 2025-11-29 06:21:22.814909615 +0000 UTC m=+7.336285120 container start 66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925 (image=quay.io/ceph/haproxy:2.3, name=practical_tharp)
Nov 29 06:21:22 compute-0 practical_tharp[95087]: 0 0
Nov 29 06:21:22 compute-0 systemd[1]: libpod-66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925.scope: Deactivated successfully.
Nov 29 06:21:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 29 06:21:23 compute-0 podman[94970]: 2025-11-29 06:21:23.112766741 +0000 UTC m=+7.634142256 container attach 66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925 (image=quay.io/ceph/haproxy:2.3, name=practical_tharp)
Nov 29 06:21:23 compute-0 podman[94970]: 2025-11-29 06:21:23.113336847 +0000 UTC m=+7.634712362 container died 66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925 (image=quay.io/ceph/haproxy:2.3, name=practical_tharp)
Nov 29 06:21:23 compute-0 ceph-mon[74654]: pgmap v189: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:23 compute-0 ceph-mon[74654]: 6.d scrub starts
Nov 29 06:21:23 compute-0 ceph-mon[74654]: 6.d scrub ok
Nov 29 06:21:23 compute-0 ceph-mon[74654]: 5.0 deep-scrub starts
Nov 29 06:21:23 compute-0 ceph-mon[74654]: 5.0 deep-scrub ok
Nov 29 06:21:23 compute-0 ceph-mon[74654]: 7.18 scrub starts
Nov 29 06:21:23 compute-0 ceph-mon[74654]: 7.18 scrub ok
Nov 29 06:21:23 compute-0 ceph-mon[74654]: mds.? [v2:192.168.122.101:6804/3552238207,v1:192.168.122.101:6805/3552238207] up:standby
Nov 29 06:21:23 compute-0 ceph-mon[74654]: fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 06:21:23 compute-0 ceph-mon[74654]: pgmap v190: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 06:21:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:23 compute-0 ceph-mon[74654]: 6.a scrub starts
Nov 29 06:21:23 compute-0 ceph-mon[74654]: 6.a scrub ok
Nov 29 06:21:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9e4a4dd1e8a1078d14faa469a4e1812dd7eff194bd2830655f303ec22147d3e-merged.mount: Deactivated successfully.
Nov 29 06:21:23 compute-0 podman[94970]: 2025-11-29 06:21:23.354194602 +0000 UTC m=+7.875570107 container remove 66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925 (image=quay.io/ceph/haproxy:2.3, name=practical_tharp)
Nov 29 06:21:23 compute-0 systemd[1]: libpod-conmon-66a47136de98396c2fb5fc1883965b7f66af552a37fbd0aad9544714ada98925.scope: Deactivated successfully.
Nov 29 06:21:23 compute-0 systemd[1]: Reloading.
Nov 29 06:21:23 compute-0 systemd-rc-local-generator[95137]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:21:23 compute-0 systemd-sysv-generator[95140]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:21:23 compute-0 systemd[1]: Reloading.
Nov 29 06:21:23 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 29 06:21:23 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 29 06:21:23 compute-0 systemd-rc-local-generator[95175]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:21:23 compute-0 systemd-sysv-generator[95178]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:21:23 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event e513d348-4646-4037-8f31-89368481c0d1 (Global Recovery Event) in 5 seconds
Nov 29 06:21:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:21:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:21:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 06:21:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 06:21:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:21:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:21:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:21:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 06:21:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:21:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 29 06:21:24 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.zzbnoj for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 06:21:24 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.14( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.8( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.13( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.1b( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.18( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.5( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.2( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.19( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[10.15( empty local-lis/les=0/0 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.14( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.422176361s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.078231812s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.17( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370802879s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026885986s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.14( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.422135353s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.078231812s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.17( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370767593s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026885986s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.13( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.422014236s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.078262329s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.10( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370596886s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026870728s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.13( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.421978951s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.078262329s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.10( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370497704s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026870728s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.12( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370359421s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026809692s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.12( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370335579s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026809692s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.1c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370198250s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026779175s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.1c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.370179176s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026779175s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1e( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.439199448s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095840454s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1e( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.439179420s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095840454s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.1b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.369860649s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026718140s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1d( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438863754s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095718384s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.1b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.369842529s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026718140s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1d( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438841820s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095718384s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.5( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.369688988s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026687622s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.5( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.369671822s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026687622s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.7( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438580513s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095611572s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.7( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438559532s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095611572s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.4( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.369668961s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026794434s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.4( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.369653702s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026794434s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.3( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438618660s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095840454s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.3( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438606262s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095840454s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.4( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438386917s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095748901s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.4( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438371658s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095748901s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.17( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438152313s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095855713s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.17( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438129425s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095855713s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.e( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438166618s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095993042s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.e( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.438115120s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095993042s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.d( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.368515015s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026443481s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.d( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.368495941s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026443481s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.f( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.437891960s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095962524s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.f( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.437859535s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095962524s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.368103027s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026443481s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.8( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.437676430s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096008301s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.c( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.368072510s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026443481s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.8( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.437631607s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096008301s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367816925s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026412964s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.b( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367795944s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026412964s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.16( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.437351227s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096038818s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.16( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.437318802s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096038818s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.15( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367474556s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026382446s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.15( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367448807s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026382446s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.14( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367616653s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026565552s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.5( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436944008s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.095932007s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.6( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367565155s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026565552s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.8( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367348671s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026367188s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.5( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436909676s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.095932007s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.6( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367545128s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026565552s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.8( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367314339s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026367188s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.14( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.367407799s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026565552s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.19( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436842918s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096023560s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.19( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436819077s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096023560s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1a( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436736107s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096054077s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1c( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436726570s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096069336s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.19( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366975784s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026336670s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1c( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436676979s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096069336s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.19( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366915703s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026336670s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1a( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436693192s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096054077s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.3( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366551399s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026214600s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.1f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366712570s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026412964s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.3( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366530418s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026214600s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.1f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366686821s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026412964s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.12( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436373711s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096145630s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.12( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436349869s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096145630s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.11( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366664886s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026489258s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.11( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366647720s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026489258s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.a( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366159439s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026153564s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.a( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366135597s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026153564s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.9( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.366021156s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026092529s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.9( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.365999222s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026092529s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.a( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.436002731s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096176147s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.a( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.435943604s) [2] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096176147s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1( v 54'2 (0'0,54'2] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.435586929s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096145630s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.365346909s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.025955200s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.2( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.365461349s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026153564s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.f( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.365249634s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.025955200s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.2( v 46'4 (0'0,46'4] local-lis/les=56/57 n=1 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.365444183s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026153564s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1( v 54'2 (0'0,54'2] local-lis/les=60/61 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.435533524s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096145630s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.16( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.365745544s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.026702881s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.16( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.365699768s) [2] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.026702881s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1b( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.434968948s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 active pruub 179.096237183s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[11.1b( v 54'2 (0'0,54'2] local-lis/les=60/61 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62 pruub=10.434947014s) [0] r=-1 lpr=62 pi=[60,62)/1 crt=54'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.096237183s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.18( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.364606857s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 181.025955200s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 62 pg[8.18( v 46'4 (0'0,46'4] local-lis/les=56/57 n=0 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62 pruub=12.364582062s) [0] r=-1 lpr=62 pi=[56,62)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.025955200s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:21:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:21:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:21:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:21:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:21:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:21:24 compute-0 podman[95233]: 2025-11-29 06:21:24.433067513 +0000 UTC m=+0.100831413 container create f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:21:24 compute-0 podman[95233]: 2025-11-29 06:21:24.36937771 +0000 UTC m=+0.037141600 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 29 06:21:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdcc739ed24461c8c577ea73c0480ca465a39cf95d639f924efb4e28e32a1b1d/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 29 06:21:24 compute-0 podman[95233]: 2025-11-29 06:21:24.608421141 +0000 UTC m=+0.276185091 container init f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:21:24 compute-0 podman[95233]: 2025-11-29 06:21:24.615399002 +0000 UTC m=+0.283162862 container start f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:21:24 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj[95248]: [NOTICE] 332/062124 (2) : New worker #1 (4) forked
Nov 29 06:21:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000058s ======
Nov 29 06:21:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:24.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Nov 29 06:21:24 compute-0 bash[95233]: f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f
Nov 29 06:21:24 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.zzbnoj for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 06:21:24 compute-0 sudo[94905]: pam_unix(sudo:session): session closed for user root
Nov 29 06:21:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:21:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 29 06:21:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 06:21:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:21:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:21:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:21:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 06:21:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:21:25 compute-0 ceph-mon[74654]: osdmap e62: 3 total, 3 up, 3 in
Nov 29 06:21:25 compute-0 sshd-session[95262]: Accepted publickey for zuul from 192.168.122.30 port 60398 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:21:25 compute-0 systemd-logind[797]: New session 34 of user zuul.
Nov 29 06:21:25 compute-0 systemd[1]: Started Session 34 of User zuul.
Nov 29 06:21:25 compute-0 sshd-session[95262]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:21:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:21:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:21:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 06:21:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:21:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 29 06:21:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:25 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 29 06:21:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:21:25 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.15( v 59'99 lc 54'78 (0'0,59'99] local-lis/les=62/63 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=59'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:25 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.5( v 54'96 (0'0,54'96] local-lis/les=62/63 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=54'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:25 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.2( v 54'96 (0'0,54'96] local-lis/les=62/63 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=54'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:25 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.19( v 54'96 (0'0,54'96] local-lis/les=62/63 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=54'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:25 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.18( v 54'96 (0'0,54'96] local-lis/les=62/63 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=54'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:25 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.8( v 54'96 (0'0,54'96] local-lis/les=62/63 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=54'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:25 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.1b( v 54'96 (0'0,54'96] local-lis/les=62/63 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=54'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:25 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.13( v 54'96 (0'0,54'96] local-lis/les=62/63 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=54'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:25 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 63 pg[10.14( v 59'99 lc 54'86 (0'0,59'99] local-lis/les=62/63 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=62) [1] r=0 lpr=62 pi=[58,62)/1 crt=59'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 06:21:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 9 peering, 296 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:26 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.lpqgfx on compute-2
Nov 29 06:21:26 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.lpqgfx on compute-2
Nov 29 06:21:26 compute-0 sshd-session[95265]: Invalid user admin123 from 79.116.35.29 port 49620
Nov 29 06:21:26 compute-0 sshd-session[95265]: Received disconnect from 79.116.35.29 port 49620:11: Bye Bye [preauth]
Nov 29 06:21:26 compute-0 sshd-session[95265]: Disconnected from invalid user admin123 79.116.35.29 port 49620 [preauth]
Nov 29 06:21:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:21:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:26.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:21:27 compute-0 python3.9[95417]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:21:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 9 peering, 296 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 109 B/s, 0 objects/s recovering
Nov 29 06:21:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 29 06:21:28 compute-0 sudo[95638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izmvglfitfmdoaeicqbzwpmoafjptsqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397288.1540556-61-80890145935306/AnsiballZ_command.py'
Nov 29 06:21:28 compute-0 sudo[95638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:21:28 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 29 06:21:28 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 29 06:21:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:21:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:28.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:21:28 compute-0 ceph-mon[74654]: 7.4 scrub starts
Nov 29 06:21:28 compute-0 ceph-mon[74654]: 7.4 scrub ok
Nov 29 06:21:28 compute-0 ceph-mon[74654]: pgmap v191: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:28 compute-0 ceph-mon[74654]: 5.d deep-scrub starts
Nov 29 06:21:28 compute-0 ceph-mon[74654]: 5.d deep-scrub ok
Nov 29 06:21:28 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:21:28 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:21:28 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 06:21:28 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:21:28 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:28 compute-0 ceph-mon[74654]: osdmap e63: 3 total, 3 up, 3 in
Nov 29 06:21:28 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:28 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:28 compute-0 ceph-mon[74654]: 5.b scrub starts
Nov 29 06:21:28 compute-0 ceph-mon[74654]: 5.b scrub ok
Nov 29 06:21:28 compute-0 python3.9[95640]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:21:29 compute-0 ceph-mgr[74948]: [progress INFO root] Writing back 19 completed events
Nov 29 06:21:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 06:21:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 29 06:21:29 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 29 06:21:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:21:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:21:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:21:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:21:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:21:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:21:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:21:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:21:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:21:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:21:29 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:29 compute-0 ceph-mgr[74948]: [progress WARNING root] Starting Global Recovery Event,40 pgs not in active + clean state
Nov 29 06:21:29 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 29 06:21:29 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 29 06:21:29 compute-0 ceph-mon[74654]: pgmap v194: 305 pgs: 9 peering, 296 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:29 compute-0 ceph-mon[74654]: Deploying daemon haproxy.rgw.default.compute-2.lpqgfx on compute-2
Nov 29 06:21:29 compute-0 ceph-mon[74654]: pgmap v195: 305 pgs: 9 peering, 296 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 109 B/s, 0 objects/s recovering
Nov 29 06:21:29 compute-0 ceph-mon[74654]: 7.e scrub starts
Nov 29 06:21:29 compute-0 ceph-mon[74654]: 7.e scrub ok
Nov 29 06:21:29 compute-0 ceph-mon[74654]: 6.2 scrub starts
Nov 29 06:21:29 compute-0 ceph-mon[74654]: 6.2 scrub ok
Nov 29 06:21:29 compute-0 ceph-mon[74654]: osdmap e64: 3 total, 3 up, 3 in
Nov 29 06:21:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 40 peering, 265 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 145 B/s, 0 objects/s recovering
Nov 29 06:21:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:21:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:30.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:21:31 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 29 06:21:31 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 29 06:21:31 compute-0 ceph-mon[74654]: 7.9 scrub starts
Nov 29 06:21:31 compute-0 ceph-mon[74654]: 7.9 scrub ok
Nov 29 06:21:31 compute-0 ceph-mon[74654]: pgmap v197: 305 pgs: 40 peering, 265 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 145 B/s, 0 objects/s recovering
Nov 29 06:21:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 31 peering, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 111 B/s, 0 objects/s recovering
Nov 29 06:21:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:21:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:32.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:21:32 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 29 06:21:32 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 29 06:21:32 compute-0 ceph-mon[74654]: 7.10 scrub starts
Nov 29 06:21:32 compute-0 ceph-mon[74654]: 7.10 scrub ok
Nov 29 06:21:32 compute-0 ceph-mon[74654]: pgmap v198: 305 pgs: 31 peering, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 111 B/s, 0 objects/s recovering
Nov 29 06:21:32 compute-0 ceph-mon[74654]: 4.1b scrub starts
Nov 29 06:21:32 compute-0 ceph-mon[74654]: 4.1b scrub ok
Nov 29 06:21:32 compute-0 ceph-mon[74654]: 5.8 scrub starts
Nov 29 06:21:32 compute-0 ceph-mon[74654]: 5.8 scrub ok
Nov 29 06:21:33 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:21:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 118 B/s, 0 objects/s recovering
Nov 29 06:21:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 06:21:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 06:21:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 29 06:21:34 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event 26d17dde-91e9-46c1-94a3-4bff28b62117 (Global Recovery Event) in 5 seconds
Nov 29 06:21:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:34.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:34 compute-0 ceph-mon[74654]: 7.b scrub starts
Nov 29 06:21:34 compute-0 ceph-mon[74654]: 7.b scrub ok
Nov 29 06:21:34 compute-0 ceph-mon[74654]: 4.1a scrub starts
Nov 29 06:21:34 compute-0 ceph-mon[74654]: 4.1a scrub ok
Nov 29 06:21:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 06:21:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 29 06:21:35 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 29 06:21:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:35.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:21:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 06:21:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 06:21:36 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 06:21:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 29 06:21:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:36.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:37 compute-0 sshd-session[95664]: Invalid user odoo15 from 104.208.108.166 port 52258
Nov 29 06:21:37 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:21:37 compute-0 sshd-session[95664]: Received disconnect from 104.208.108.166 port 52258:11: Bye Bye [preauth]
Nov 29 06:21:37 compute-0 sshd-session[95664]: Disconnected from invalid user odoo15 104.208.108.166 port 52258 [preauth]
Nov 29 06:21:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 11 B/s, 0 objects/s recovering
Nov 29 06:21:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 06:21:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 06:21:38 compute-0 sudo[95638]: pam_unix(sudo:session): session closed for user root
Nov 29 06:21:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:38.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:39 compute-0 ceph-mgr[74948]: [progress INFO root] Writing back 20 completed events
Nov 29 06:21:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:21:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:21:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 06:21:39 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 29 06:21:39 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 29 06:21:39 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 06:21:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 29 06:21:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 06:21:40 compute-0 ceph-mon[74654]: pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 118 B/s, 0 objects/s recovering
Nov 29 06:21:40 compute-0 ceph-mon[74654]: 3.15 scrub starts
Nov 29 06:21:40 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 06:21:40 compute-0 ceph-mon[74654]: 3.15 scrub ok
Nov 29 06:21:40 compute-0 ceph-mon[74654]: 6.5 scrub starts
Nov 29 06:21:40 compute-0 ceph-mon[74654]: 6.5 scrub ok
Nov 29 06:21:40 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 06:21:40 compute-0 ceph-mon[74654]: osdmap e65: 3 total, 3 up, 3 in
Nov 29 06:21:40 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 29 06:21:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:40.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 06:21:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 06:21:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616820335s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.191085815s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616754532s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.191085815s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616343498s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.190933228s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616616249s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.191238403s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616241455s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.190933228s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616504669s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.191238403s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616054535s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.190872192s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.616014481s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.190872192s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.615541458s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.190856934s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.615483284s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.190856934s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.614504814s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.190017700s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.614473343s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.190017700s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.613969803s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.189636230s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.613945961s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.189636230s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.614059448s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 199.189956665s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 66 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=13.614003181s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.189956665s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:21:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:41.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:21:41 compute-0 sshd-session[95267]: Connection closed by 192.168.122.30 port 60398
Nov 29 06:21:41 compute-0 sshd-session[95262]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:21:41 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 29 06:21:41 compute-0 systemd[1]: session-34.scope: Consumed 9.395s CPU time.
Nov 29 06:21:41 compute-0 systemd-logind[797]: Session 34 logged out. Waiting for processes to exit.
Nov 29 06:21:41 compute-0 systemd-logind[797]: Removed session 34.
Nov 29 06:21:41 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 06:21:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 29 06:21:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 06:21:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 06:21:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:42.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:21:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:43.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:21:43 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 29 06:21:43 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 29 06:21:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:44 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 06:21:44 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 06:21:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 29 06:21:44 compute-0 ceph-mon[74654]: pgmap v201: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 06:21:44 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 06:21:44 compute-0 ceph-mon[74654]: 4.e scrub starts
Nov 29 06:21:44 compute-0 ceph-mon[74654]: 4.e scrub ok
Nov 29 06:21:44 compute-0 ceph-mon[74654]: 6.3 deep-scrub starts
Nov 29 06:21:44 compute-0 ceph-mon[74654]: 6.3 deep-scrub ok
Nov 29 06:21:44 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:44 compute-0 ceph-mon[74654]: pgmap v202: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 11 B/s, 0 objects/s recovering
Nov 29 06:21:44 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 06:21:44 compute-0 ceph-mon[74654]: 7.f scrub starts
Nov 29 06:21:44 compute-0 ceph-mon[74654]: 7.f scrub ok
Nov 29 06:21:44 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 06:21:44 compute-0 ceph-mon[74654]: pgmap v203: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 06:21:44 compute-0 ceph-mon[74654]: 6.7 scrub starts
Nov 29 06:21:44 compute-0 ceph-mon[74654]: 6.7 scrub ok
Nov 29 06:21:44 compute-0 ceph-mon[74654]: osdmap e66: 3 total, 3 up, 3 in
Nov 29 06:21:44 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 06:21:44 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:44 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 29 06:21:44 compute-0 ceph-mgr[74948]: [progress WARNING root] Starting Global Recovery Event,8 pgs not in active + clean state
Nov 29 06:21:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:44.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 29 06:21:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:45.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:45 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 29 06:21:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:46 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 29 06:21:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:46.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:46 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 29 06:21:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Nov 29 06:21:47 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 29 06:21:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:47.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:48 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 06:21:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 29 06:21:48 compute-0 ceph-mon[74654]: 5.13 scrub starts
Nov 29 06:21:48 compute-0 ceph-mon[74654]: pgmap v205: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 06:21:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:48 compute-0 ceph-mon[74654]: 7.8 scrub starts
Nov 29 06:21:48 compute-0 ceph-mon[74654]: 7.8 scrub ok
Nov 29 06:21:48 compute-0 ceph-mon[74654]: pgmap v206: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:48 compute-0 ceph-mon[74654]: 4.d scrub starts
Nov 29 06:21:48 compute-0 ceph-mon[74654]: 4.d scrub ok
Nov 29 06:21:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 06:21:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 06:21:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:48 compute-0 ceph-mon[74654]: osdmap e67: 3 total, 3 up, 3 in
Nov 29 06:21:48 compute-0 ceph-mon[74654]: 3.16 scrub starts
Nov 29 06:21:48 compute-0 ceph-mon[74654]: 3.16 scrub ok
Nov 29 06:21:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:48.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:49 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 68 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:21:49 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 29 06:21:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:49.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 29 06:21:49 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:49 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 06:21:49 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 06:21:49 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 06:21:49 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 06:21:49 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.klqjoa on compute-2
Nov 29 06:21:49 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.klqjoa on compute-2
Nov 29 06:21:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:50.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:50 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 29 06:21:50 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 29 06:21:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 29 06:21:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:51.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:52 compute-0 ceph-mon[74654]: 5.13 scrub ok
Nov 29 06:21:52 compute-0 ceph-mon[74654]: 7.1e scrub starts
Nov 29 06:21:52 compute-0 ceph-mon[74654]: pgmap v208: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:52 compute-0 ceph-mon[74654]: 7.1e scrub ok
Nov 29 06:21:52 compute-0 ceph-mon[74654]: 7.1b scrub starts
Nov 29 06:21:52 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:52 compute-0 ceph-mon[74654]: 5.10 scrub starts
Nov 29 06:21:52 compute-0 ceph-mon[74654]: 5.10 scrub ok
Nov 29 06:21:52 compute-0 ceph-mon[74654]: 7.1b scrub ok
Nov 29 06:21:52 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 06:21:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 29 06:21:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:52.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:21:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:21:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:53.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:21:53 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:21:54
Nov 29 06:21:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:21:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Some PGs (0.026230) are unknown; try again later
Nov 29 06:21:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:21:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:21:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:21:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:21:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:21:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:21:54 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:54 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:54 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:54 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:54 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:54 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:54 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 69 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=68) [2]/[1] async=[2] r=0 lpr=68 pi=[58,68)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:21:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 29 06:21:54 compute-0 ceph-mon[74654]: 3.e scrub starts
Nov 29 06:21:54 compute-0 ceph-mon[74654]: 3.e scrub ok
Nov 29 06:21:54 compute-0 ceph-mon[74654]: pgmap v209: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:54 compute-0 ceph-mon[74654]: 5.1a scrub starts
Nov 29 06:21:54 compute-0 ceph-mon[74654]: 5.1a scrub ok
Nov 29 06:21:54 compute-0 ceph-mon[74654]: osdmap e68: 3 total, 3 up, 3 in
Nov 29 06:21:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:21:54 compute-0 ceph-mon[74654]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 06:21:54 compute-0 ceph-mon[74654]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 06:21:54 compute-0 ceph-mon[74654]: Deploying daemon keepalived.rgw.default.compute-2.klqjoa on compute-2
Nov 29 06:21:54 compute-0 ceph-mon[74654]: pgmap v211: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:54 compute-0 ceph-mon[74654]: 3.1d scrub starts
Nov 29 06:21:54 compute-0 ceph-mon[74654]: 3.1d scrub ok
Nov 29 06:21:54 compute-0 ceph-mon[74654]: 7.2 scrub starts
Nov 29 06:21:54 compute-0 ceph-mon[74654]: 7.2 scrub ok
Nov 29 06:21:54 compute-0 ceph-mon[74654]: 5.11 scrub starts
Nov 29 06:21:54 compute-0 ceph-mon[74654]: 5.11 scrub ok
Nov 29 06:21:54 compute-0 ceph-mon[74654]: pgmap v212: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:54 compute-0 ceph-mon[74654]: osdmap e69: 3 total, 3 up, 3 in
Nov 29 06:21:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:54.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 29 06:21:55 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 29 06:21:55 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 70 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=70 pruub=14.174674988s) [2] async=[2] r=-1 lpr=70 pi=[58,70)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.084030151s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:55 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 70 pg[9.17( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=70 pruub=14.174575806s) [2] r=-1 lpr=70 pi=[58,70)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.084030151s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:21:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:55.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:21:55 compute-0 ceph-mon[74654]: 3.14 scrub starts
Nov 29 06:21:55 compute-0 ceph-mon[74654]: 3.14 scrub ok
Nov 29 06:21:55 compute-0 ceph-mon[74654]: pgmap v214: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:21:55 compute-0 ceph-mon[74654]: osdmap e70: 3 total, 3 up, 3 in
Nov 29 06:21:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 6 active+remapped, 1 active+recovering+remapped, 1 unknown, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 984 B/s wr, 87 op/s; 6/210 objects misplaced (2.857%); 120 B/s, 4 objects/s recovering
Nov 29 06:21:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 29 06:21:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 29 06:21:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:56.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:56 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 29 06:21:56 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.574925423s) [2] async=[2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.867950439s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:56 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.13( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.574789047s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.867950439s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:56 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.574648857s) [2] async=[2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.867843628s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:56 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.574518204s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.867843628s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:56 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.573354721s) [2] async=[2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.867782593s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:56 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.7( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.573298454s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.867782593s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:56 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.572719574s) [2] async=[2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.867492676s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:56 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.f( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.572625160s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.867492676s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:56 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.571574211s) [2] async=[2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.867523193s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:56 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=5 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.571413994s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.867523193s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:56 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.571501732s) [2] async=[2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.867752075s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:56 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.571502686s) [2] async=[2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 214.867919922s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:21:56 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.b( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.571456909s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.867752075s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:56 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 71 pg[9.3( v 56'1130 (0'0,56'1130] local-lis/les=68/69 n=6 ec=58/47 lis/c=68/58 les/c/f=69/59/0 sis=71 pruub=13.571374893s) [2] r=-1 lpr=71 pi=[58,71)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.867919922s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:21:57 compute-0 sshd-session[95700]: Invalid user smart from 31.6.212.12 port 41458
Nov 29 06:21:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:57.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:57 compute-0 sshd-session[95700]: Received disconnect from 31.6.212.12 port 41458:11: Bye Bye [preauth]
Nov 29 06:21:57 compute-0 sshd-session[95700]: Disconnected from invalid user smart 31.6.212.12 port 41458 [preauth]
Nov 29 06:21:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 29 06:21:57 compute-0 sshd-session[95702]: Accepted publickey for zuul from 192.168.122.30 port 52984 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:21:57 compute-0 systemd-logind[797]: New session 35 of user zuul.
Nov 29 06:21:57 compute-0 systemd[1]: Started Session 35 of User zuul.
Nov 29 06:21:57 compute-0 sshd-session[95702]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:21:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 6 active+remapped, 1 active+recovering+remapped, 1 unknown, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 59 KiB/s rd, 1.2 KiB/s wr, 106 op/s; 6/210 objects misplaced (2.857%); 146 B/s, 4 objects/s recovering
Nov 29 06:21:58 compute-0 ceph-mon[74654]: 5.1c scrub starts
Nov 29 06:21:58 compute-0 ceph-mon[74654]: 5.1c scrub ok
Nov 29 06:21:58 compute-0 ceph-mon[74654]: pgmap v216: 305 pgs: 6 active+remapped, 1 active+recovering+remapped, 1 unknown, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 984 B/s wr, 87 op/s; 6/210 objects misplaced (2.857%); 120 B/s, 4 objects/s recovering
Nov 29 06:21:58 compute-0 ceph-mon[74654]: osdmap e71: 3 total, 3 up, 3 in
Nov 29 06:21:58 compute-0 python3.9[95855]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 06:21:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:21:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:21:58.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:21:58 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 29 06:21:58 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 29 06:21:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 29 06:21:59 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 29 06:21:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:21:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:21:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:21:59.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:21:59 compute-0 python3.9[96029]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:21:59 compute-0 ceph-mon[74654]: 3.1c scrub starts
Nov 29 06:21:59 compute-0 ceph-mon[74654]: pgmap v218: 305 pgs: 6 active+remapped, 1 active+recovering+remapped, 1 unknown, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 59 KiB/s rd, 1.2 KiB/s wr, 106 op/s; 6/210 objects misplaced (2.857%); 146 B/s, 4 objects/s recovering
Nov 29 06:21:59 compute-0 ceph-mon[74654]: 3.1c scrub ok
Nov 29 06:21:59 compute-0 ceph-mon[74654]: 3.1b scrub starts
Nov 29 06:21:59 compute-0 ceph-mon[74654]: 3.1b scrub ok
Nov 29 06:21:59 compute-0 ceph-mon[74654]: osdmap e72: 3 total, 3 up, 3 in
Nov 29 06:22:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 1.3 KiB/s wr, 121 op/s; 0 B/s, 0 objects/s recovering
Nov 29 06:22:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:00.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:00 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 29 06:22:00 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 29 06:22:00 compute-0 sudo[96183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omwrubjlztwgmhdfnwtyklpnmrazdcan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397320.4832292-98-268451725239498/AnsiballZ_command.py'
Nov 29 06:22:00 compute-0 sudo[96183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:22:01 compute-0 python3.9[96185]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:22:01 compute-0 sudo[96183]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:01.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:01 compute-0 ceph-mon[74654]: 3.12 scrub starts
Nov 29 06:22:01 compute-0 ceph-mon[74654]: 3.12 scrub ok
Nov 29 06:22:01 compute-0 ceph-mon[74654]: 5.7 scrub starts
Nov 29 06:22:01 compute-0 ceph-mon[74654]: 5.7 scrub ok
Nov 29 06:22:01 compute-0 ceph-mon[74654]: 3.13 scrub starts
Nov 29 06:22:01 compute-0 ceph-mon[74654]: 3.13 scrub ok
Nov 29 06:22:01 compute-0 ceph-mon[74654]: pgmap v220: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 1.3 KiB/s wr, 121 op/s; 0 B/s, 0 objects/s recovering
Nov 29 06:22:01 compute-0 ceph-mon[74654]: 4.19 scrub starts
Nov 29 06:22:01 compute-0 ceph-mon[74654]: 4.19 scrub ok
Nov 29 06:22:01 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 29 06:22:01 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 29 06:22:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 1.2 KiB/s wr, 108 op/s; 0 B/s, 0 objects/s recovering
Nov 29 06:22:02 compute-0 sudo[96336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnzvvzgzghggghrthegbvqmlnsyvttgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397321.6201022-134-113713518374424/AnsiballZ_stat.py'
Nov 29 06:22:02 compute-0 sudo[96336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:22:02 compute-0 python3.9[96338]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:22:02 compute-0 sudo[96336]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:02.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:02 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 29 06:22:02 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 29 06:22:03 compute-0 ceph-mon[74654]: 3.17 scrub starts
Nov 29 06:22:03 compute-0 ceph-mon[74654]: 3.17 scrub ok
Nov 29 06:22:03 compute-0 ceph-mon[74654]: 5.1b scrub starts
Nov 29 06:22:03 compute-0 ceph-mon[74654]: 5.1b scrub ok
Nov 29 06:22:03 compute-0 ceph-mon[74654]: 3.18 scrub starts
Nov 29 06:22:03 compute-0 ceph-mon[74654]: 3.18 scrub ok
Nov 29 06:22:03 compute-0 ceph-mon[74654]: pgmap v221: 305 pgs: 7 peering, 298 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 1.2 KiB/s wr, 108 op/s; 0 B/s, 0 objects/s recovering
Nov 29 06:22:03 compute-0 sudo[96490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcjryerpctosyvmcvfmzuyahwsasytyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397322.7009325-167-110699243387119/AnsiballZ_file.py'
Nov 29 06:22:03 compute-0 sudo[96490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:22:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:22:03 compute-0 python3.9[96492]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:22:03 compute-0 sudo[96490]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:22:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:03.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:22:03 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 29 06:22:03 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 29 06:22:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 127 B/s wr, 11 op/s; 41 B/s, 1 objects/s recovering
Nov 29 06:22:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 29 06:22:04 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 06:22:04 compute-0 sudo[96642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsfpqkmoxefosgyaovcsuuuystdfxqyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397324.0046208-194-49553851872672/AnsiballZ_file.py'
Nov 29 06:22:04 compute-0 sudo[96642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:22:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 29 06:22:04 compute-0 python3.9[96644]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:22:04 compute-0 sudo[96642]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:04 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event 34d76df6-32e5-4f0c-9055-8e03a8da6814 (Global Recovery Event) in 20 seconds
Nov 29 06:22:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:04.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:05.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:05 compute-0 ceph-mon[74654]: 3.1 scrub starts
Nov 29 06:22:05 compute-0 ceph-mon[74654]: 3.1 scrub ok
Nov 29 06:22:05 compute-0 ceph-mon[74654]: 5.f scrub starts
Nov 29 06:22:05 compute-0 ceph-mon[74654]: 5.f scrub ok
Nov 29 06:22:05 compute-0 ceph-mon[74654]: 7.14 scrub starts
Nov 29 06:22:05 compute-0 ceph-mon[74654]: 7.14 scrub ok
Nov 29 06:22:05 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 29 06:22:05 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 29 06:22:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 5.4 KiB/s rd, 110 B/s wr, 9 op/s; 35 B/s, 0 objects/s recovering
Nov 29 06:22:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 29 06:22:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 06:22:06 compute-0 python3.9[96794]: ansible-ansible.builtin.service_facts Invoked
Nov 29 06:22:06 compute-0 network[96811]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 06:22:06 compute-0 network[96812]: 'network-scripts' will be removed from distribution in near future.
Nov 29 06:22:06 compute-0 network[96813]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 06:22:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 06:22:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 29 06:22:06 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 29 06:22:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.420503616s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.191528320s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.419773102s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.191268921s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.419677734s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.191482544s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.419614792s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.191482544s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.419064522s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.191131592s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.419006348s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.191131592s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.419088364s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.191528320s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:06 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 73 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=73 pruub=12.418711662s) [2] r=-1 lpr=73 pi=[58,73)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.191268921s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:06.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:22:06 compute-0 ceph-mon[74654]: 5.14 scrub starts
Nov 29 06:22:06 compute-0 ceph-mon[74654]: 5.14 scrub ok
Nov 29 06:22:06 compute-0 ceph-mon[74654]: pgmap v222: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 127 B/s wr, 11 op/s; 41 B/s, 1 objects/s recovering
Nov 29 06:22:06 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 06:22:06 compute-0 ceph-mon[74654]: 5.1f scrub starts
Nov 29 06:22:06 compute-0 ceph-mon[74654]: 5.1f scrub ok
Nov 29 06:22:06 compute-0 ceph-mon[74654]: 7.1d deep-scrub starts
Nov 29 06:22:06 compute-0 ceph-mon[74654]: 7.1d deep-scrub ok
Nov 29 06:22:06 compute-0 ceph-mon[74654]: pgmap v223: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 5.4 KiB/s rd, 110 B/s wr, 9 op/s; 35 B/s, 0 objects/s recovering
Nov 29 06:22:06 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 06:22:06 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 06:22:06 compute-0 ceph-mon[74654]: osdmap e73: 3 total, 3 up, 3 in
Nov 29 06:22:07 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:22:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 29 06:22:07 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 06:22:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:07.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:07 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 06:22:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 29 06:22:07 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 29 06:22:07 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:07 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:22:07 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:07 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:22:07 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:07 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:22:07 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:07 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 74 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:22:07 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 29 06:22:07 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:07 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 06:22:07 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 06:22:07 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 06:22:07 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 06:22:07 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.uyqrbs on compute-0
Nov 29 06:22:07 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.uyqrbs on compute-0
Nov 29 06:22:08 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 29 06:22:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 151 B/s, 4 objects/s recovering
Nov 29 06:22:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 29 06:22:08 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 06:22:08 compute-0 ceph-mon[74654]: 5.17 scrub starts
Nov 29 06:22:08 compute-0 ceph-mon[74654]: 5.17 scrub ok
Nov 29 06:22:08 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:08 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:08 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 06:22:08 compute-0 ceph-mon[74654]: osdmap e74: 3 total, 3 up, 3 in
Nov 29 06:22:08 compute-0 sudo[96875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:08 compute-0 sudo[96875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:08 compute-0 sudo[96875]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:08 compute-0 sudo[96904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:22:08 compute-0 sudo[96904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:08 compute-0 sudo[96904]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:08 compute-0 sudo[96932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:08 compute-0 sudo[96932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:08 compute-0 sudo[96932]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:08 compute-0 sudo[96961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:22:08 compute-0 sudo[96961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:22:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 29 06:22:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:08.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:09 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 06:22:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 29 06:22:09 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 29 06:22:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.681794167s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.191467285s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.681725502s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.191467285s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.680493355s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.191268921s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.680408478s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.191268921s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.679621696s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.190826416s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.679498672s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.190826416s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.679297447s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 223.190750122s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=75 pruub=9.679251671s) [0] r=-1 lpr=75 pi=[58,75)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.190750122s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:22:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:22:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:22:09 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 75 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[58,74)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:22:09 compute-0 ceph-mon[74654]: 5.1e scrub starts
Nov 29 06:22:09 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:09 compute-0 ceph-mon[74654]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 06:22:09 compute-0 ceph-mon[74654]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 06:22:09 compute-0 ceph-mon[74654]: Deploying daemon keepalived.rgw.default.compute-0.uyqrbs on compute-0
Nov 29 06:22:09 compute-0 ceph-mon[74654]: 5.1e scrub ok
Nov 29 06:22:09 compute-0 ceph-mon[74654]: pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 151 B/s, 4 objects/s recovering
Nov 29 06:22:09 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 06:22:09 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 06:22:09 compute-0 ceph-mon[74654]: osdmap e75: 3 total, 3 up, 3 in
Nov 29 06:22:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:09.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:09 compute-0 ceph-mgr[74948]: [progress INFO root] Writing back 21 completed events
Nov 29 06:22:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 06:22:09 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 29 06:22:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:22:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 29 06:22:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 06:22:10 compute-0 python3.9[97260]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:22:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:10.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 29 06:22:10 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 29 06:22:11 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:11 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:22:11 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:11 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:22:11 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:11 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:11 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:22:11 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 76 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:22:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:11.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:11 compute-0 python3.9[97425]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:22:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 29 06:22:11 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:11 compute-0 ceph-mon[74654]: pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:22:11 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 06:22:11 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 29 06:22:11 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 2 active+recovery_wait+remapped, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15/215 objects misplaced (6.977%); 38 B/s, 1 objects/s recovering
Nov 29 06:22:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 06:22:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 29 06:22:12 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 29 06:22:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=6 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.818251610s) [2] async=[2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 229.706832886s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=5 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.818084717s) [2] async=[2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 229.706848145s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=5 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.813915253s) [2] async=[2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 229.702682495s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.15( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=5 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.813832283s) [2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.702682495s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=6 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.813839912s) [2] async=[2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 229.702865601s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.d( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=6 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.813767433s) [2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.702865601s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.5( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=6 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.817247391s) [2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.706832886s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:12 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.1d( v 56'1130 (0'0,56'1130] local-lis/les=74/75 n=5 ec=58/47 lis/c=74/58 les/c/f=75/59/0 sis=77 pruub=12.817220688s) [2] r=-1 lpr=77 pi=[58,77)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.706848145s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:22:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:22:12 compute-0 podman[97045]: 2025-11-29 06:22:12.6428453 +0000 UTC m=+4.102556115 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 29 06:22:12 compute-0 podman[97045]: 2025-11-29 06:22:12.725327411 +0000 UTC m=+4.185038156 container create ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_hypatia, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, name=keepalived, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, release=1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, build-date=2023-02-22T09:23:20)
Nov 29 06:22:12 compute-0 systemd[76267]: Created slice User Background Tasks Slice.
Nov 29 06:22:12 compute-0 systemd[76267]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 06:22:12 compute-0 systemd[1]: Started libpod-conmon-ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251.scope.
Nov 29 06:22:12 compute-0 systemd[76267]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 06:22:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:22:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:12.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:12 compute-0 podman[97045]: 2025-11-29 06:22:12.815814852 +0000 UTC m=+4.275525607 container init ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_hypatia, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, version=2.2.4, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph)
Nov 29 06:22:12 compute-0 podman[97045]: 2025-11-29 06:22:12.824824111 +0000 UTC m=+4.284534846 container start ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_hypatia, version=2.2.4, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=keepalived-container, io.buildah.version=1.28.2)
Nov 29 06:22:12 compute-0 sleepy_hypatia[97587]: 0 0
Nov 29 06:22:12 compute-0 systemd[1]: libpod-ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251.scope: Deactivated successfully.
Nov 29 06:22:12 compute-0 podman[97045]: 2025-11-29 06:22:12.832937414 +0000 UTC m=+4.292648179 container attach ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_hypatia, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, distribution-scope=public, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, release=1793, vcs-type=git, io.buildah.version=1.28.2, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.openshift.expose-services=)
Nov 29 06:22:12 compute-0 podman[97045]: 2025-11-29 06:22:12.833452579 +0000 UTC m=+4.293163324 container died ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_hypatia, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, name=keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vendor=Red Hat, Inc., version=2.2.4, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 06:22:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-14a0be28cbe8a27d1ba6b7d6e055081dbc513e533240cc7f364122d015dcc029-merged.mount: Deactivated successfully.
Nov 29 06:22:13 compute-0 podman[97045]: 2025-11-29 06:22:13.02450018 +0000 UTC m=+4.484210925 container remove ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251 (image=quay.io/ceph/keepalived:2.2.4, name=sleepy_hypatia, vendor=Red Hat, Inc., name=keepalived, io.openshift.expose-services=, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, distribution-scope=public, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, com.redhat.component=keepalived-container, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 06:22:13 compute-0 systemd[1]: libpod-conmon-ad3d38a391c76f75f60c72e0c20e2421f402cca1289b400416279b4fc18d2251.scope: Deactivated successfully.
Nov 29 06:22:13 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] async=[0] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:22:13 compute-0 python3.9[97617]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:22:13 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] async=[0] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:22:13 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] async=[0] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:22:13 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 77 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=76) [0]/[1] async=[0] r=0 lpr=76 pi=[58,76)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:22:13 compute-0 systemd[1]: Reloading.
Nov 29 06:22:13 compute-0 systemd-rc-local-generator[97666]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:22:13 compute-0 systemd-sysv-generator[97669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:22:13 compute-0 ceph-mon[74654]: osdmap e76: 3 total, 3 up, 3 in
Nov 29 06:22:13 compute-0 ceph-mon[74654]: 5.1d scrub starts
Nov 29 06:22:13 compute-0 ceph-mon[74654]: 5.1d scrub ok
Nov 29 06:22:13 compute-0 ceph-mon[74654]: pgmap v230: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 2 active+recovery_wait+remapped, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15/215 objects misplaced (6.977%); 38 B/s, 1 objects/s recovering
Nov 29 06:22:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 06:22:13 compute-0 ceph-mon[74654]: osdmap e77: 3 total, 3 up, 3 in
Nov 29 06:22:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:22:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 29 06:22:13 compute-0 systemd[1]: Reloading.
Nov 29 06:22:13 compute-0 systemd-rc-local-generator[97731]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:22:13 compute-0 systemd-sysv-generator[97736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:22:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:13.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:13 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.uyqrbs for 336ec58c-893b-528f-a0c1-6ed1196bc047...
Nov 29 06:22:13 compute-0 sudo[97926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytqulqxbslhzfvtjbpxegtjwudlngaet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397333.616573-338-162493875568104/AnsiballZ_setup.py'
Nov 29 06:22:13 compute-0 sudo[97926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:22:13 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 29 06:22:13 compute-0 podman[97880]: 2025-11-29 06:22:13.86927605 +0000 UTC m=+0.021187410 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 29 06:22:13 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 29 06:22:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 2 active+recovery_wait+remapped, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15/215 objects misplaced (6.977%); 36 B/s, 1 objects/s recovering
Nov 29 06:22:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:14.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:14 compute-0 ceph-mgr[74948]: [progress WARNING root] Starting Global Recovery Event,4 pgs not in active + clean state
Nov 29 06:22:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:15.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:15 compute-0 python3.9[97928]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:22:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 1 active+recovering+remapped, 5 active+remapped, 2 active+recovery_wait+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15/215 objects misplaced (6.977%); 63 B/s, 4 objects/s recovering
Nov 29 06:22:16 compute-0 sudo[97926]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:16 compute-0 sudo[98010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foipzwjjpylxoyedcgyaudkanzvsxstl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397333.616573-338-162493875568104/AnsiballZ_dnf.py'
Nov 29 06:22:16 compute-0 sudo[98010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:22:16 compute-0 python3.9[98012]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:22:16 compute-0 podman[97880]: 2025-11-29 06:22:16.725650908 +0000 UTC m=+2.877562238 container create c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, build-date=2023-02-22T09:23:20, release=1793, version=2.2.4, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Nov 29 06:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59275f770a1a56dcc7697791c45a93f5dc6caab1bfa9bfceb0efcfcbcaa4aac0/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:22:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:16.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:16 compute-0 podman[97880]: 2025-11-29 06:22:16.805153733 +0000 UTC m=+2.957065153 container init c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, description=keepalived for Ceph, release=1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.openshift.expose-services=, version=2.2.4, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 29 06:22:16 compute-0 podman[97880]: 2025-11-29 06:22:16.811006971 +0000 UTC m=+2.962918331 container start c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vcs-type=git, version=2.2.4, io.buildah.version=1.28.2, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 06:22:16 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 29 06:22:16 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Nov 29 06:22:16 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 29 06:22:16 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 29 06:22:16 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 29 06:22:16 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: Starting VRRP child process, pid=4
Nov 29 06:22:16 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: Startup complete
Nov 29 06:22:16 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: (VI_0) Entering BACKUP STATE (init)
Nov 29 06:22:16 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:16 2025: VRRP_Script(check_backend) succeeded
Nov 29 06:22:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 29 06:22:16 compute-0 bash[97880]: c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade
Nov 29 06:22:16 compute-0 ceph-mon[74654]: 5.15 scrub starts
Nov 29 06:22:16 compute-0 ceph-mon[74654]: 5.15 scrub ok
Nov 29 06:22:16 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 29 06:22:16 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=5 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.191514969s) [0] async=[0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 233.521408081s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:16 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.16( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=5 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.191367149s) [0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.521408081s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:16 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=6 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.190675735s) [0] async=[0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 233.521423340s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:16 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.e( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=6 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.190603256s) [0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.521423340s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:16 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=6 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.183088303s) [0] async=[0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 233.514236450s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:16 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=5 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.190187454s) [0] async=[0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 233.521392822s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:16 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.6( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=6 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.183005333s) [0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.514236450s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:16 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 78 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=76/77 n=5 ec=58/47 lis/c=76/58 les/c/f=77/59/0 sis=78 pruub=12.190085411s) [0] r=-1 lpr=78 pi=[58,78)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.521392822s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:16 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.uyqrbs for 336ec58c-893b-528f-a0c1-6ed1196bc047.
Nov 29 06:22:17 compute-0 sudo[96961]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:22:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:17.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 29 06:22:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:22:17 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 29 06:22:17 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 29 06:22:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 1 active+recovering+remapped, 5 active+remapped, 2 active+recovery_wait+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15/215 objects misplaced (6.977%); 30 B/s, 2 objects/s recovering
Nov 29 06:22:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 29 06:22:18 compute-0 ceph-mon[74654]: 3.19 scrub starts
Nov 29 06:22:18 compute-0 ceph-mon[74654]: 3.19 scrub ok
Nov 29 06:22:18 compute-0 ceph-mon[74654]: pgmap v232: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 2 active+recovery_wait+remapped, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15/215 objects misplaced (6.977%); 36 B/s, 1 objects/s recovering
Nov 29 06:22:18 compute-0 ceph-mon[74654]: 7.5 scrub starts
Nov 29 06:22:18 compute-0 ceph-mon[74654]: 7.5 scrub ok
Nov 29 06:22:18 compute-0 ceph-mon[74654]: 5.18 scrub starts
Nov 29 06:22:18 compute-0 ceph-mon[74654]: 5.18 scrub ok
Nov 29 06:22:18 compute-0 ceph-mon[74654]: pgmap v233: 305 pgs: 1 active+recovering+remapped, 5 active+remapped, 2 active+recovery_wait+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15/215 objects misplaced (6.977%); 63 B/s, 4 objects/s recovering
Nov 29 06:22:18 compute-0 ceph-mon[74654]: 5.1 scrub starts
Nov 29 06:22:18 compute-0 ceph-mon[74654]: 5.1 scrub ok
Nov 29 06:22:18 compute-0 ceph-mon[74654]: 7.a scrub starts
Nov 29 06:22:18 compute-0 ceph-mon[74654]: 7.a scrub ok
Nov 29 06:22:18 compute-0 ceph-mon[74654]: osdmap e78: 3 total, 3 up, 3 in
Nov 29 06:22:18 compute-0 ceph-mon[74654]: 3.5 scrub starts
Nov 29 06:22:18 compute-0 ceph-mon[74654]: 3.5 scrub ok
Nov 29 06:22:18 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:18 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 29 06:22:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 06:22:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:22:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:18 compute-0 ceph-mgr[74948]: [progress INFO root] complete: finished ev 69c26498-5953-4c32-b667-91684388cce7 (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 29 06:22:18 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event 69c26498-5953-4c32-b667-91684388cce7 (Updating ingress.rgw.default deployment (+4 -> 4)) in 64 seconds
Nov 29 06:22:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 06:22:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:18 compute-0 sudo[98043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:18 compute-0 sudo[98042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:18 compute-0 sudo[98043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:18 compute-0 sudo[98042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:18 compute-0 sudo[98043]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:18 compute-0 sudo[98042]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:18 compute-0 sudo[98095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:18 compute-0 sudo[98096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:22:18 compute-0 sudo[98095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:18 compute-0 sudo[98096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:18 compute-0 sudo[98095]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:18 compute-0 sudo[98096]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:18.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:18 compute-0 sudo[98149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:18 compute-0 sudo[98149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:18 compute-0 sudo[98149]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:18 compute-0 sudo[98176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:22:18 compute-0 sudo[98176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:18 compute-0 sudo[98176]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:18 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 29 06:22:19 compute-0 sudo[98201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:19 compute-0 sudo[98201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:19 compute-0 sudo[98201]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:19 compute-0 sudo[98227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 06:22:19 compute-0 sudo[98227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:19 compute-0 sshd-session[98145]: Invalid user sammy from 138.124.186.225 port 49496
Nov 29 06:22:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:19.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:19 compute-0 sshd-session[98145]: Received disconnect from 138.124.186.225 port 49496:11: Bye Bye [preauth]
Nov 29 06:22:19 compute-0 sshd-session[98145]: Disconnected from invalid user sammy 138.124.186.225 port 49496 [preauth]
Nov 29 06:22:20 compute-0 ceph-mgr[74948]: [progress INFO root] Writing back 22 completed events
Nov 29 06:22:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 4 peering, 4 active+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 78 B/s, 4 objects/s recovering
Nov 29 06:22:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 06:22:20 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 29 06:22:20 compute-0 podman[98324]: 2025-11-29 06:22:20.215703367 +0000 UTC m=+0.625013525 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:22:20 compute-0 podman[98324]: 2025-11-29 06:22:20.340344839 +0000 UTC m=+0.749654997 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 06:22:20 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs[98016]: Sat Nov 29 06:22:20 2025: (VI_0) Entering MASTER STATE
Nov 29 06:22:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:22:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:20.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:21 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 29 06:22:21 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 29 06:22:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:21.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 4 peering, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 74 B/s, 4 objects/s recovering
Nov 29 06:22:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:22:22 compute-0 sshd-session[98508]: Invalid user kingbase from 103.147.159.91 port 52716
Nov 29 06:22:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:22.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:22 compute-0 podman[98492]: 2025-11-29 06:22:22.887942932 +0000 UTC m=+1.836599458 container exec f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:22:23 compute-0 ceph-mon[74654]: 7.16 scrub starts
Nov 29 06:22:23 compute-0 ceph-mon[74654]: 7.16 scrub ok
Nov 29 06:22:23 compute-0 ceph-mon[74654]: 5.5 scrub starts
Nov 29 06:22:23 compute-0 ceph-mon[74654]: 5.5 scrub ok
Nov 29 06:22:23 compute-0 ceph-mon[74654]: pgmap v235: 305 pgs: 1 active+recovering+remapped, 5 active+remapped, 2 active+recovery_wait+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 15/215 objects misplaced (6.977%); 30 B/s, 2 objects/s recovering
Nov 29 06:22:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:23 compute-0 ceph-mon[74654]: osdmap e79: 3 total, 3 up, 3 in
Nov 29 06:22:23 compute-0 ceph-mon[74654]: 7.1f scrub starts
Nov 29 06:22:23 compute-0 ceph-mon[74654]: 7.1f scrub ok
Nov 29 06:22:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:23 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 29 06:22:23 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 29 06:22:23 compute-0 sshd-session[98508]: Received disconnect from 103.147.159.91 port 52716:11: Bye Bye [preauth]
Nov 29 06:22:23 compute-0 sshd-session[98508]: Disconnected from invalid user kingbase 103.147.159.91 port 52716 [preauth]
Nov 29 06:22:23 compute-0 podman[98492]: 2025-11-29 06:22:23.147406359 +0000 UTC m=+2.096062885 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:22:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:22:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:23.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:22:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:22:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 46 B/s, 1 objects/s recovering
Nov 29 06:22:24 compute-0 ceph-mon[74654]: 3.4 scrub starts
Nov 29 06:22:24 compute-0 ceph-mon[74654]: 7.11 scrub starts
Nov 29 06:22:24 compute-0 ceph-mon[74654]: 7.11 scrub ok
Nov 29 06:22:24 compute-0 ceph-mon[74654]: pgmap v237: 305 pgs: 4 peering, 4 active+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 78 B/s, 4 objects/s recovering
Nov 29 06:22:24 compute-0 ceph-mon[74654]: 3.4 scrub ok
Nov 29 06:22:24 compute-0 ceph-mon[74654]: 3.1e scrub starts
Nov 29 06:22:24 compute-0 ceph-mon[74654]: 3.1e scrub ok
Nov 29 06:22:24 compute-0 ceph-mon[74654]: pgmap v238: 305 pgs: 4 peering, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 74 B/s, 4 objects/s recovering
Nov 29 06:22:24 compute-0 ceph-mon[74654]: 6.1 scrub starts
Nov 29 06:22:24 compute-0 ceph-mon[74654]: 6.1 scrub ok
Nov 29 06:22:24 compute-0 ceph-mon[74654]: 3.7 scrub starts
Nov 29 06:22:24 compute-0 ceph-mon[74654]: 3.7 scrub ok
Nov 29 06:22:24 compute-0 ceph-mon[74654]: 4.14 scrub starts
Nov 29 06:22:24 compute-0 ceph-mon[74654]: 4.14 scrub ok
Nov 29 06:22:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 29 06:22:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 06:22:24 compute-0 podman[98578]: 2025-11-29 06:22:24.257474055 +0000 UTC m=+0.144815273 container exec c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, io.openshift.tags=Ceph keepalived, name=keepalived, io.openshift.expose-services=, vcs-type=git, release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 06:22:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:22:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:22:24 compute-0 podman[98578]: 2025-11-29 06:22:24.278600902 +0000 UTC m=+0.165942120 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, release=1793, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 06:22:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:22:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:22:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:22:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:22:24 compute-0 sudo[98227]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:22:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:22:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:24 compute-0 sudo[98612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:24 compute-0 sudo[98612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:24 compute-0 sudo[98612]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:24 compute-0 sudo[98637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:22:24 compute-0 sudo[98637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:24 compute-0 sudo[98637]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:24 compute-0 sudo[98665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:24 compute-0 sudo[98665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:24 compute-0 sudo[98665]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:24 compute-0 sudo[98690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:22:24 compute-0 sudo[98690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:24.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 29 06:22:25 compute-0 sudo[98690]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:22:25 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:22:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:22:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:22:25 compute-0 ceph-mon[74654]: pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 46 B/s, 1 objects/s recovering
Nov 29 06:22:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 06:22:25 compute-0 ceph-mon[74654]: 4.1d scrub starts
Nov 29 06:22:25 compute-0 ceph-mon[74654]: 4.1d scrub ok
Nov 29 06:22:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:22:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 06:22:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 29 06:22:25 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 29 06:22:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:25 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 8825e13f-1524-4f42-96fe-4d5641d9472e does not exist
Nov 29 06:22:25 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 274b306b-a052-4fad-935b-f622c512e3ee does not exist
Nov 29 06:22:25 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev b62b7db2-e7dd-4846-a176-bd5b9efc327a does not exist
Nov 29 06:22:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:22:25 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:22:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:22:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:22:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:22:25 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:22:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:25.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:25 compute-0 sudo[98747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:25 compute-0 sudo[98747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:25 compute-0 sudo[98747]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:25 compute-0 sudo[98772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:22:25 compute-0 sudo[98772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:25 compute-0 sudo[98772]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:25 compute-0 sudo[98797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:25 compute-0 sudo[98797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:25 compute-0 sudo[98797]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:25 compute-0 sudo[98822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:22:25 compute-0 sudo[98822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 46 B/s, 1 objects/s recovering
Nov 29 06:22:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 29 06:22:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 06:22:26 compute-0 podman[98887]: 2025-11-29 06:22:26.225723075 +0000 UTC m=+0.101026175 container create aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_einstein, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 06:22:26 compute-0 podman[98887]: 2025-11-29 06:22:26.152958283 +0000 UTC m=+0.028261393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:22:26 compute-0 systemd[1]: Started libpod-conmon-aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3.scope.
Nov 29 06:22:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:22:26 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 80 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=80 pruub=8.407164574s) [2] r=-1 lpr=80 pi=[58,80)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 239.191238403s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:26 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 80 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=80 pruub=8.407092094s) [2] r=-1 lpr=80 pi=[58,80)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 239.191238403s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:26 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 80 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=80 pruub=8.406921387s) [2] r=-1 lpr=80 pi=[58,80)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 239.191238403s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:26 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 80 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=80 pruub=8.406853676s) [2] r=-1 lpr=80 pi=[58,80)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 239.191238403s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:26 compute-0 podman[98887]: 2025-11-29 06:22:26.378047163 +0000 UTC m=+0.253350273 container init aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 06:22:26 compute-0 podman[98887]: 2025-11-29 06:22:26.385823506 +0000 UTC m=+0.261126586 container start aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:22:26 compute-0 podman[98887]: 2025-11-29 06:22:26.38941328 +0000 UTC m=+0.264716390 container attach aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_einstein, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:22:26 compute-0 musing_einstein[98903]: 167 167
Nov 29 06:22:26 compute-0 systemd[1]: libpod-aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3.scope: Deactivated successfully.
Nov 29 06:22:26 compute-0 podman[98887]: 2025-11-29 06:22:26.39323861 +0000 UTC m=+0.268541700 container died aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_einstein, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 06:22:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f6fa6523a98bcaa0ee5e39fe3260bdbf66f9ea7f57591e0b9bc87de2cdd922f-merged.mount: Deactivated successfully.
Nov 29 06:22:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 29 06:22:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:22:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:22:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 06:22:26 compute-0 ceph-mon[74654]: osdmap e80: 3 total, 3 up, 3 in
Nov 29 06:22:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:22:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:22:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:22:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 06:22:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:22:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:26.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:22:26 compute-0 podman[98887]: 2025-11-29 06:22:26.848780873 +0000 UTC m=+0.724083953 container remove aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 06:22:26 compute-0 systemd[1]: libpod-conmon-aa246c832b00437fa3c5ea02a3dc25968be3511496f8752074384a9f12583ba3.scope: Deactivated successfully.
Nov 29 06:22:27 compute-0 podman[98928]: 2025-11-29 06:22:27.042103679 +0000 UTC m=+0.053272482 container create 19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 06:22:27 compute-0 podman[98928]: 2025-11-29 06:22:27.011390816 +0000 UTC m=+0.022559639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:22:27 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 06:22:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 29 06:22:27 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 29 06:22:27 compute-0 systemd[1]: Started libpod-conmon-19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b.scope.
Nov 29 06:22:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae17ed59ca598e381a342e13c12b2245949339a5199b49c56c5b912d0e92afed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae17ed59ca598e381a342e13c12b2245949339a5199b49c56c5b912d0e92afed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae17ed59ca598e381a342e13c12b2245949339a5199b49c56c5b912d0e92afed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae17ed59ca598e381a342e13c12b2245949339a5199b49c56c5b912d0e92afed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:22:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae17ed59ca598e381a342e13c12b2245949339a5199b49c56c5b912d0e92afed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:22:27 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 81 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=81 pruub=15.454995155s) [2] r=-1 lpr=81 pi=[58,81)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 247.191528320s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:27 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 81 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=81 pruub=15.419450760s) [2] r=-1 lpr=81 pi=[58,81)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 247.156372070s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:27 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 81 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=81 pruub=15.419392586s) [2] r=-1 lpr=81 pi=[58,81)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 247.156372070s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:27 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 81 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=81 pruub=15.454208374s) [2] r=-1 lpr=81 pi=[58,81)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 247.191528320s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:27 compute-0 podman[98928]: 2025-11-29 06:22:27.363379873 +0000 UTC m=+0.374548766 container init 19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:22:27 compute-0 podman[98928]: 2025-11-29 06:22:27.370254901 +0000 UTC m=+0.381423734 container start 19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:22:27 compute-0 podman[98928]: 2025-11-29 06:22:27.451427224 +0000 UTC m=+0.462596107 container attach 19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 06:22:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:27.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:27 compute-0 ceph-mon[74654]: pgmap v241: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 46 B/s, 1 objects/s recovering
Nov 29 06:22:27 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 06:22:27 compute-0 ceph-mon[74654]: osdmap e81: 3 total, 3 up, 3 in
Nov 29 06:22:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:22:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 29 06:22:28 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 06:22:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 29 06:22:28 compute-0 distracted_curie[98945]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:22:28 compute-0 distracted_curie[98945]: --> relative data size: 1.0
Nov 29 06:22:28 compute-0 distracted_curie[98945]: --> All data devices are unavailable
Nov 29 06:22:28 compute-0 systemd[1]: libpod-19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b.scope: Deactivated successfully.
Nov 29 06:22:28 compute-0 podman[98928]: 2025-11-29 06:22:28.281212713 +0000 UTC m=+1.292381546 container died 19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:22:28 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 06:22:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 29 06:22:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae17ed59ca598e381a342e13c12b2245949339a5199b49c56c5b912d0e92afed-merged.mount: Deactivated successfully.
Nov 29 06:22:28 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 29 06:22:28 compute-0 podman[98928]: 2025-11-29 06:22:28.489857781 +0000 UTC m=+1.501026634 container remove 19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82 pruub=14.274819374s) [0] r=-1 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 247.191848755s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82 pruub=14.274504662s) [0] r=-1 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 247.191528320s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82 pruub=14.274509430s) [0] r=-1 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 247.191848755s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 82 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82 pruub=14.274133682s) [0] r=-1 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 247.191528320s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:28 compute-0 systemd[1]: libpod-conmon-19f7299a5ce46e2ec58b033a5da810fcd530e85fdd640a51fc5d3431b9c0532b.scope: Deactivated successfully.
Nov 29 06:22:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:22:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 29 06:22:28 compute-0 sudo[98822]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:28 compute-0 sudo[98979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:28 compute-0 sudo[98979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:28 compute-0 sudo[98979]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:28 compute-0 sudo[99004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:22:28 compute-0 sudo[99004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:28 compute-0 sudo[99004]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:28 compute-0 sudo[99029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:28 compute-0 sudo[99029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:28 compute-0 sudo[99029]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:28 compute-0 sudo[99054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:22:28 compute-0 sudo[99054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:28.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 29 06:22:28 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=83) [0]/[1] r=0 lpr=83 pi=[58,83)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=83) [0]/[1] r=0 lpr=83 pi=[58,83)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=83) [0]/[1] r=0 lpr=83 pi=[58,83)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:28 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=83) [0]/[1] r=0 lpr=83 pi=[58,83)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:22:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:22:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:22:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:22:29 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 83 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[58,82)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:22:29 compute-0 ceph-mon[74654]: pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:22:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 06:22:29 compute-0 ceph-mon[74654]: 3.d scrub starts
Nov 29 06:22:29 compute-0 ceph-mon[74654]: 3.d scrub ok
Nov 29 06:22:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 06:22:29 compute-0 ceph-mon[74654]: osdmap e82: 3 total, 3 up, 3 in
Nov 29 06:22:29 compute-0 podman[99119]: 2025-11-29 06:22:29.155645675 +0000 UTC m=+0.055147496 container create 7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclaren, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 06:22:29 compute-0 systemd[1]: Started libpod-conmon-7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5.scope.
Nov 29 06:22:29 compute-0 podman[99119]: 2025-11-29 06:22:29.131140751 +0000 UTC m=+0.030642592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:22:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:22:29 compute-0 podman[99119]: 2025-11-29 06:22:29.274744218 +0000 UTC m=+0.174246039 container init 7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclaren, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 06:22:29 compute-0 podman[99119]: 2025-11-29 06:22:29.283057567 +0000 UTC m=+0.182559378 container start 7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclaren, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 06:22:29 compute-0 tender_mclaren[99135]: 167 167
Nov 29 06:22:29 compute-0 podman[99119]: 2025-11-29 06:22:29.29148428 +0000 UTC m=+0.190986091 container attach 7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 06:22:29 compute-0 systemd[1]: libpod-7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5.scope: Deactivated successfully.
Nov 29 06:22:29 compute-0 podman[99119]: 2025-11-29 06:22:29.293283681 +0000 UTC m=+0.192785502 container died 7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclaren, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 06:22:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8234eb08ca3e82fa3c775e9787e1c83e343253caa637b471d5aab86ca868a17e-merged.mount: Deactivated successfully.
Nov 29 06:22:29 compute-0 podman[99119]: 2025-11-29 06:22:29.426166671 +0000 UTC m=+0.325668482 container remove 7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 06:22:29 compute-0 systemd[1]: libpod-conmon-7239a10072ab86a9591e07a56afcaf8f17212a7854440619b242862e9c3685e5.scope: Deactivated successfully.
Nov 29 06:22:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:22:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:22:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:22:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:22:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:22:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:22:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:22:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:22:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:22:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:22:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:29.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:29 compute-0 podman[99161]: 2025-11-29 06:22:29.598758621 +0000 UTC m=+0.037435927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:22:29 compute-0 podman[99161]: 2025-11-29 06:22:29.840674974 +0000 UTC m=+0.279352250 container create 28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 29 06:22:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 29 06:22:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:22:30 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 29 06:22:30 compute-0 systemd[1]: Started libpod-conmon-28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32.scope.
Nov 29 06:22:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603e58711c1bb1d0463336c1c908f0b63417a9d6cf4f3ed742d603ee63d542f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603e58711c1bb1d0463336c1c908f0b63417a9d6cf4f3ed742d603ee63d542f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603e58711c1bb1d0463336c1c908f0b63417a9d6cf4f3ed742d603ee63d542f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:22:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f603e58711c1bb1d0463336c1c908f0b63417a9d6cf4f3ed742d603ee63d542f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:22:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:30.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:30 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 29 06:22:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:31.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 29 06:22:31 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 84 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=6 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=84 pruub=13.403115273s) [2] async=[2] r=-1 lpr=84 pi=[58,84)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 249.484466553s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:31 compute-0 podman[99161]: 2025-11-29 06:22:31.657419451 +0000 UTC m=+2.096096747 container init 28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_diffie, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:22:31 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 84 pg[9.9( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=6 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=84 pruub=13.402838707s) [2] r=-1 lpr=84 pi=[58,84)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.484466553s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:31 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 84 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=5 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=84 pruub=13.406016350s) [2] async=[2] r=-1 lpr=84 pi=[58,84)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 249.488967896s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:31 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 84 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=5 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=84 pruub=13.405915260s) [2] r=-1 lpr=84 pi=[58,84)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.488967896s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:31 compute-0 podman[99161]: 2025-11-29 06:22:31.665340759 +0000 UTC m=+2.104018035 container start 28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_diffie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:22:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 4 active+remapped, 2 remapped+peering, 299 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 112 B/s, 4 objects/s recovering
Nov 29 06:22:32 compute-0 keen_diffie[99177]: {
Nov 29 06:22:32 compute-0 keen_diffie[99177]:     "1": [
Nov 29 06:22:32 compute-0 keen_diffie[99177]:         {
Nov 29 06:22:32 compute-0 keen_diffie[99177]:             "devices": [
Nov 29 06:22:32 compute-0 keen_diffie[99177]:                 "/dev/loop3"
Nov 29 06:22:32 compute-0 keen_diffie[99177]:             ],
Nov 29 06:22:32 compute-0 keen_diffie[99177]:             "lv_name": "ceph_lv0",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:             "lv_size": "7511998464",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:             "name": "ceph_lv0",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:             "tags": {
Nov 29 06:22:32 compute-0 keen_diffie[99177]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:                 "ceph.cluster_name": "ceph",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:                 "ceph.crush_device_class": "",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:                 "ceph.encrypted": "0",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:                 "ceph.osd_id": "1",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:                 "ceph.type": "block",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:                 "ceph.vdo": "0"
Nov 29 06:22:32 compute-0 keen_diffie[99177]:             },
Nov 29 06:22:32 compute-0 keen_diffie[99177]:             "type": "block",
Nov 29 06:22:32 compute-0 keen_diffie[99177]:             "vg_name": "ceph_vg0"
Nov 29 06:22:32 compute-0 keen_diffie[99177]:         }
Nov 29 06:22:32 compute-0 keen_diffie[99177]:     ]
Nov 29 06:22:32 compute-0 keen_diffie[99177]: }
Nov 29 06:22:32 compute-0 systemd[1]: libpod-28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32.scope: Deactivated successfully.
Nov 29 06:22:32 compute-0 systemd[1]: libpod-28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32.scope: Consumed 1.016s CPU time.
Nov 29 06:22:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:32.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:33 compute-0 sshd-session[99183]: Invalid user deploy from 79.116.35.29 port 48936
Nov 29 06:22:33 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 29 06:22:33 compute-0 sshd-session[99183]: Received disconnect from 79.116.35.29 port 48936:11: Bye Bye [preauth]
Nov 29 06:22:33 compute-0 sshd-session[99183]: Disconnected from invalid user deploy 79.116.35.29 port 48936 [preauth]
Nov 29 06:22:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:33.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 2 peering, 2 active+remapped, 2 remapped+peering, 299 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Nov 29 06:22:34 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 29 06:22:34 compute-0 podman[99161]: 2025-11-29 06:22:34.427517848 +0000 UTC m=+4.866195224 container attach 28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 06:22:34 compute-0 podman[99161]: 2025-11-29 06:22:34.428867006 +0000 UTC m=+4.867544322 container died 28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_diffie, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:22:34 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 84 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=83/84 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[58,83)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:22:34 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 29 06:22:34 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 29 06:22:34 compute-0 ceph-mon[74654]: osdmap e83: 3 total, 3 up, 3 in
Nov 29 06:22:34 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 84 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=83/84 n=5 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=83) [0]/[1] async=[0] r=0 lpr=83 pi=[58,83)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:22:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:34.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f603e58711c1bb1d0463336c1c908f0b63417a9d6cf4f3ed742d603ee63d542f-merged.mount: Deactivated successfully.
Nov 29 06:22:35 compute-0 podman[99161]: 2025-11-29 06:22:35.193132113 +0000 UTC m=+5.631809409 container remove 28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 29 06:22:35 compute-0 systemd[1]: libpod-conmon-28be7c59b6d057e8f78353c99749647468e3d3acbf9fd785cd90473e465c6e32.scope: Deactivated successfully.
Nov 29 06:22:35 compute-0 sudo[99054]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:35 compute-0 sudo[99205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:35 compute-0 sudo[99205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:35 compute-0 sudo[99205]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:35 compute-0 sudo[99230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:22:35 compute-0 sudo[99230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:35 compute-0 sudo[99230]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:35 compute-0 sudo[99255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:35 compute-0 sudo[99255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:35 compute-0 sudo[99255]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:35 compute-0 sudo[99280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:22:35 compute-0 sudo[99280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:35.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:35 compute-0 podman[99352]: 2025-11-29 06:22:35.86210338 +0000 UTC m=+0.029149349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:22:35 compute-0 podman[99352]: 2025-11-29 06:22:35.96614225 +0000 UTC m=+0.133188209 container create 2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:22:36 compute-0 systemd[1]: Started libpod-conmon-2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c.scope.
Nov 29 06:22:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:22:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 2 peering, 4 active+remapped, 299 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 530 B/s wr, 47 op/s; 56 B/s, 3 objects/s recovering
Nov 29 06:22:36 compute-0 podman[99352]: 2025-11-29 06:22:36.107981296 +0000 UTC m=+0.275027275 container init 2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 06:22:36 compute-0 podman[99352]: 2025-11-29 06:22:36.118442277 +0000 UTC m=+0.285488236 container start 2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 06:22:36 compute-0 zen_cohen[99368]: 167 167
Nov 29 06:22:36 compute-0 systemd[1]: libpod-2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c.scope: Deactivated successfully.
Nov 29 06:22:36 compute-0 podman[99352]: 2025-11-29 06:22:36.125587022 +0000 UTC m=+0.292632981 container attach 2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:22:36 compute-0 podman[99352]: 2025-11-29 06:22:36.125996674 +0000 UTC m=+0.293042633 container died 2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 06:22:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-03bad5bd9e51593b12fdfd852e6484b69f72177503b6a6bbecaf9126ea0796bd-merged.mount: Deactivated successfully.
Nov 29 06:22:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:36.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 29 06:22:37 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 29 06:22:37 compute-0 podman[99352]: 2025-11-29 06:22:37.21834861 +0000 UTC m=+1.385394609 container remove 2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cohen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:22:37 compute-0 systemd[1]: libpod-conmon-2391eafeba9734bb8f28ded802f53c403f430f4c48482338cc6b55280aa8108c.scope: Deactivated successfully.
Nov 29 06:22:37 compute-0 podman[99391]: 2025-11-29 06:22:37.398461407 +0000 UTC m=+0.027763429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:22:37 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 85 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=6 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=85 pruub=15.455229759s) [2] async=[2] r=-1 lpr=85 pi=[58,85)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 257.489105225s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:37 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 85 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=5 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=85 pruub=15.455081940s) [2] async=[2] r=-1 lpr=85 pi=[58,85)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 257.489105225s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:37 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 85 pg[9.8( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=6 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=85 pruub=15.455101013s) [2] r=-1 lpr=85 pi=[58,85)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.489105225s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:37 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 85 pg[9.18( v 56'1130 (0'0,56'1130] local-lis/les=82/83 n=5 ec=58/47 lis/c=82/58 les/c/f=83/59/0 sis=85 pruub=15.454952240s) [2] r=-1 lpr=85 pi=[58,85)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.489105225s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:37.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:37 compute-0 podman[99391]: 2025-11-29 06:22:37.821529517 +0000 UTC m=+0.450831519 container create 4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:22:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 2 peering, 4 active+remapped, 299 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 511 B/s wr, 45 op/s; 54 B/s, 2 objects/s recovering
Nov 29 06:22:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 29 06:22:38 compute-0 ceph-mon[74654]: pgmap v246: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:22:38 compute-0 ceph-mon[74654]: osdmap e84: 3 total, 3 up, 3 in
Nov 29 06:22:38 compute-0 ceph-mon[74654]: pgmap v248: 305 pgs: 4 active+remapped, 2 remapped+peering, 299 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 112 B/s, 4 objects/s recovering
Nov 29 06:22:38 compute-0 ceph-mon[74654]: 3.6 scrub starts
Nov 29 06:22:38 compute-0 ceph-mon[74654]: pgmap v249: 305 pgs: 2 peering, 2 active+remapped, 2 remapped+peering, 299 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Nov 29 06:22:38 compute-0 ceph-mon[74654]: 5.3 scrub starts
Nov 29 06:22:38 compute-0 ceph-mon[74654]: 3.6 scrub ok
Nov 29 06:22:38 compute-0 ceph-mon[74654]: 5.3 scrub ok
Nov 29 06:22:38 compute-0 systemd[1]: Started libpod-conmon-4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c.scope.
Nov 29 06:22:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:22:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10cfbaf5de727c4a0adb2261ba42edeb0ccdb72ede96db860d8824fce79c2ee6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:22:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10cfbaf5de727c4a0adb2261ba42edeb0ccdb72ede96db860d8824fce79c2ee6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:22:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10cfbaf5de727c4a0adb2261ba42edeb0ccdb72ede96db860d8824fce79c2ee6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:22:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10cfbaf5de727c4a0adb2261ba42edeb0ccdb72ede96db860d8824fce79c2ee6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:22:38 compute-0 podman[99391]: 2025-11-29 06:22:38.515213194 +0000 UTC m=+1.144515256 container init 4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:22:38 compute-0 podman[99391]: 2025-11-29 06:22:38.528121025 +0000 UTC m=+1.157423027 container start 4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:22:38 compute-0 sudo[99417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:38 compute-0 sudo[99417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:38 compute-0 sudo[99417]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:38 compute-0 podman[99391]: 2025-11-29 06:22:38.656154945 +0000 UTC m=+1.285456977 container attach 4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sanderson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:22:38 compute-0 sudo[99442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:38 compute-0 sudo[99442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:38 compute-0 sudo[99442]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:38.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:39 compute-0 charming_sanderson[99411]: {
Nov 29 06:22:39 compute-0 charming_sanderson[99411]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:22:39 compute-0 charming_sanderson[99411]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:22:39 compute-0 charming_sanderson[99411]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:22:39 compute-0 charming_sanderson[99411]:         "osd_id": 1,
Nov 29 06:22:39 compute-0 charming_sanderson[99411]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:22:39 compute-0 charming_sanderson[99411]:         "type": "bluestore"
Nov 29 06:22:39 compute-0 charming_sanderson[99411]:     }
Nov 29 06:22:39 compute-0 charming_sanderson[99411]: }
Nov 29 06:22:39 compute-0 systemd[1]: libpod-4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c.scope: Deactivated successfully.
Nov 29 06:22:39 compute-0 podman[99391]: 2025-11-29 06:22:39.45724632 +0000 UTC m=+2.086548372 container died 4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sanderson, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:22:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:39.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-10cfbaf5de727c4a0adb2261ba42edeb0ccdb72ede96db860d8824fce79c2ee6-merged.mount: Deactivated successfully.
Nov 29 06:22:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 2 peering, 2 active+remapped, 301 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 06:22:40 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 29 06:22:40 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 29 06:22:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:40.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:40 compute-0 podman[99391]: 2025-11-29 06:22:40.90116336 +0000 UTC m=+3.530465412 container remove 4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sanderson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 06:22:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 29 06:22:40 compute-0 systemd[1]: libpod-conmon-4397ea6a1c27eea754d751539b6312923ea1e712d6cf32fef852a370fba6631c.scope: Deactivated successfully.
Nov 29 06:22:40 compute-0 sudo[99280]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:41 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 29 06:22:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:22:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 86 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=83/84 n=5 ec=58/47 lis/c=83/58 les/c/f=84/59/0 sis=86 pruub=9.302009583s) [0] async=[0] r=-1 lpr=86 pi=[58,86)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 255.009017944s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 86 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=83/84 n=5 ec=58/47 lis/c=83/58 les/c/f=84/59/0 sis=86 pruub=9.301831245s) [0] r=-1 lpr=86 pi=[58,86)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.009017944s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 86 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=83/84 n=6 ec=58/47 lis/c=83/58 les/c/f=84/59/0 sis=86 pruub=9.193807602s) [0] async=[0] r=-1 lpr=86 pi=[58,86)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 254.901672363s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:22:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 86 pg[9.a( v 56'1130 (0'0,56'1130] local-lis/les=83/84 n=6 ec=58/47 lis/c=83/58 les/c/f=84/59/0 sis=86 pruub=9.193541527s) [0] r=-1 lpr=86 pi=[58,86)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.901672363s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:22:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:41.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 4 peering, 301 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Nov 29 06:22:42 compute-0 ceph-mon[74654]: 4.15 scrub starts
Nov 29 06:22:42 compute-0 ceph-mon[74654]: 4.15 scrub ok
Nov 29 06:22:42 compute-0 ceph-mon[74654]: 4.1f scrub starts
Nov 29 06:22:42 compute-0 ceph-mon[74654]: 4.1f scrub ok
Nov 29 06:22:42 compute-0 ceph-mon[74654]: 4.1c scrub starts
Nov 29 06:22:42 compute-0 ceph-mon[74654]: 4.1c scrub ok
Nov 29 06:22:42 compute-0 ceph-mon[74654]: pgmap v250: 305 pgs: 2 peering, 4 active+remapped, 299 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 530 B/s wr, 47 op/s; 56 B/s, 3 objects/s recovering
Nov 29 06:22:42 compute-0 ceph-mon[74654]: osdmap e85: 3 total, 3 up, 3 in
Nov 29 06:22:42 compute-0 ceph-mon[74654]: 10.4 scrub starts
Nov 29 06:22:42 compute-0 ceph-mon[74654]: 10.4 scrub ok
Nov 29 06:22:42 compute-0 ceph-mon[74654]: pgmap v252: 305 pgs: 2 peering, 4 active+remapped, 299 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 511 B/s wr, 45 op/s; 54 B/s, 2 objects/s recovering
Nov 29 06:22:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000059s ======
Nov 29 06:22:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:42.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000059s
Nov 29 06:22:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 29 06:22:43 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 29 06:22:43 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 29 06:22:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:43.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:43 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:22:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Nov 29 06:22:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 29 06:22:44 compute-0 ceph-mon[74654]: 11.3 scrub starts
Nov 29 06:22:44 compute-0 ceph-mon[74654]: 11.3 scrub ok
Nov 29 06:22:44 compute-0 ceph-mon[74654]: 8.6 scrub starts
Nov 29 06:22:44 compute-0 ceph-mon[74654]: 8.6 scrub ok
Nov 29 06:22:44 compute-0 ceph-mon[74654]: pgmap v253: 305 pgs: 2 peering, 2 active+remapped, 301 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 06:22:44 compute-0 ceph-mon[74654]: 5.6 scrub starts
Nov 29 06:22:44 compute-0 ceph-mon[74654]: 5.6 scrub ok
Nov 29 06:22:44 compute-0 ceph-mon[74654]: osdmap e86: 3 total, 3 up, 3 in
Nov 29 06:22:44 compute-0 ceph-mon[74654]: 10.f deep-scrub starts
Nov 29 06:22:44 compute-0 ceph-mon[74654]: 10.f deep-scrub ok
Nov 29 06:22:44 compute-0 ceph-mon[74654]: pgmap v255: 305 pgs: 4 peering, 301 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Nov 29 06:22:44 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 29 06:22:44 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:44 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev a460cd3d-4bbf-4556-a1b6-57f8cc8d048e does not exist
Nov 29 06:22:44 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 48a3ec61-0af0-475d-8b6a-93cda3a0dca9 does not exist
Nov 29 06:22:44 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 2079cd49-7755-4b35-a6e3-4391fd914b00 does not exist
Nov 29 06:22:44 compute-0 sudo[99538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:44 compute-0 sudo[99538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:44 compute-0 sudo[99538]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:22:44 compute-0 sudo[99563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:22:44 compute-0 sudo[99563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:44 compute-0 sudo[99563]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:44.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:45 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 06:22:45 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 06:22:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 06:22:45 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 06:22:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 06:22:45 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 06:22:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:22:45 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:22:45 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 06:22:45 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 06:22:45 compute-0 sudo[99589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:45 compute-0 sudo[99589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:45 compute-0 sudo[99589]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:45 compute-0 sudo[99614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:22:45 compute-0 sudo[99614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:45 compute-0 sudo[99614]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:45 compute-0 sudo[99639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:45 compute-0 sudo[99639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:45 compute-0 sudo[99639]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:45 compute-0 sudo[99664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:22:45 compute-0 sudo[99664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:45.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:45 compute-0 podman[99706]: 2025-11-29 06:22:45.76617007 +0000 UTC m=+0.116453601 container create 20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:22:45 compute-0 podman[99706]: 2025-11-29 06:22:45.681458252 +0000 UTC m=+0.031741883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:22:45 compute-0 systemd[1]: Started libpod-conmon-20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f.scope.
Nov 29 06:22:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:22:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Nov 29 06:22:46 compute-0 ceph-mon[74654]: 3.2 scrub starts
Nov 29 06:22:46 compute-0 ceph-mon[74654]: 3.2 scrub ok
Nov 29 06:22:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:46 compute-0 ceph-mon[74654]: pgmap v256: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Nov 29 06:22:46 compute-0 ceph-mon[74654]: osdmap e87: 3 total, 3 up, 3 in
Nov 29 06:22:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 06:22:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 06:22:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:22:46 compute-0 podman[99706]: 2025-11-29 06:22:46.275489747 +0000 UTC m=+0.625773368 container init 20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 06:22:46 compute-0 podman[99706]: 2025-11-29 06:22:46.286938373 +0000 UTC m=+0.637221904 container start 20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 06:22:46 compute-0 nifty_mirzakhani[99722]: 167 167
Nov 29 06:22:46 compute-0 systemd[1]: libpod-20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f.scope: Deactivated successfully.
Nov 29 06:22:46 compute-0 podman[99706]: 2025-11-29 06:22:46.30863789 +0000 UTC m=+0.658921451 container attach 20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 06:22:46 compute-0 podman[99706]: 2025-11-29 06:22:46.310948568 +0000 UTC m=+0.661232119 container died 20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:22:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-30ad1e9783d23b1b12afd731a94a888087c0a1113c88e3620719ff348631cb8e-merged.mount: Deactivated successfully.
Nov 29 06:22:46 compute-0 podman[99706]: 2025-11-29 06:22:46.42371977 +0000 UTC m=+0.774003331 container remove 20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 06:22:46 compute-0 systemd[1]: libpod-conmon-20638be0340ef59ceb2150b24991149b30488ff33d74a10730e2350a8ecb100f.scope: Deactivated successfully.
Nov 29 06:22:46 compute-0 sudo[99664]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:22:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:46.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:22:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:47 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.vxabpq (monmap changed)...
Nov 29 06:22:47 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.vxabpq (monmap changed)...
Nov 29 06:22:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.vxabpq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 06:22:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vxabpq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 06:22:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 06:22:47 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:22:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:22:47 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:22:47 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.vxabpq on compute-0
Nov 29 06:22:47 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.vxabpq on compute-0
Nov 29 06:22:47 compute-0 sudo[99741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:47 compute-0 sudo[99741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:47 compute-0 sudo[99741]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:47 compute-0 sudo[99766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:22:47 compute-0 sudo[99766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:47 compute-0 sudo[99766]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:47 compute-0 sudo[99791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:47 compute-0 sudo[99791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:47 compute-0 sudo[99791]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:47 compute-0 ceph-mon[74654]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 06:22:47 compute-0 ceph-mon[74654]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 06:22:47 compute-0 sudo[99816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:22:47 compute-0 ceph-mon[74654]: pgmap v258: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Nov 29 06:22:47 compute-0 ceph-mon[74654]: 11.8 scrub starts
Nov 29 06:22:47 compute-0 ceph-mon[74654]: 11.8 scrub ok
Nov 29 06:22:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vxabpq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 06:22:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:22:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:22:47 compute-0 sudo[99816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 06:22:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:47.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 06:22:47 compute-0 podman[99859]: 2025-11-29 06:22:47.760705962 +0000 UTC m=+0.065546246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:22:47 compute-0 podman[99859]: 2025-11-29 06:22:47.858239436 +0000 UTC m=+0.163079690 container create ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 06:22:47 compute-0 systemd[1]: Started libpod-conmon-ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579.scope.
Nov 29 06:22:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:22:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 06:22:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 29 06:22:48 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 06:22:48 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 29 06:22:48 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 29 06:22:48 compute-0 podman[99859]: 2025-11-29 06:22:48.305418038 +0000 UTC m=+0.610258312 container init ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 06:22:48 compute-0 podman[99859]: 2025-11-29 06:22:48.316694869 +0000 UTC m=+0.621535143 container start ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 06:22:48 compute-0 funny_ritchie[99875]: 167 167
Nov 29 06:22:48 compute-0 systemd[1]: libpod-ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579.scope: Deactivated successfully.
Nov 29 06:22:48 compute-0 ceph-mgr[74948]: [progress INFO root] Completed event 7f07609f-e0c7-4950-b4cf-712380532355 (Global Recovery Event) in 33 seconds
Nov 29 06:22:48 compute-0 podman[99859]: 2025-11-29 06:22:48.594481267 +0000 UTC m=+0.899321551 container attach ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 06:22:48 compute-0 podman[99859]: 2025-11-29 06:22:48.595078304 +0000 UTC m=+0.899918588 container died ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:22:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 29 06:22:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:48.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:49 compute-0 ceph-mon[74654]: 3.c scrub starts
Nov 29 06:22:49 compute-0 ceph-mon[74654]: 3.c scrub ok
Nov 29 06:22:49 compute-0 ceph-mon[74654]: Reconfiguring mgr.compute-0.vxabpq (monmap changed)...
Nov 29 06:22:49 compute-0 ceph-mon[74654]: Reconfiguring daemon mgr.compute-0.vxabpq on compute-0
Nov 29 06:22:49 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 06:22:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e15a2e2b64ff8d07b62b298690764860e16653a0358a82663c98515823cc01f-merged.mount: Deactivated successfully.
Nov 29 06:22:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:49.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:49 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 06:22:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 29 06:22:49 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 29 06:22:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 25 B/s, 1 objects/s recovering
Nov 29 06:22:50 compute-0 podman[99859]: 2025-11-29 06:22:50.129953869 +0000 UTC m=+2.434794123 container remove ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 06:22:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 29 06:22:50 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 06:22:50 compute-0 systemd[1]: libpod-conmon-ca121ab1b226539af7dc25fb8ac890458d4fa259550c2235aa07aca4fc1e5579.scope: Deactivated successfully.
Nov 29 06:22:50 compute-0 ceph-mon[74654]: pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 06:22:50 compute-0 ceph-mon[74654]: 5.19 scrub starts
Nov 29 06:22:50 compute-0 ceph-mon[74654]: 5.19 scrub ok
Nov 29 06:22:50 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 06:22:50 compute-0 ceph-mon[74654]: osdmap e88: 3 total, 3 up, 3 in
Nov 29 06:22:50 compute-0 sudo[99816]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:22:50 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:22:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 29 06:22:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:50.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:51 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 06:22:51 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 06:22:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 06:22:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 06:22:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:22:51 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:22:51 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 06:22:51 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 06:22:51 compute-0 sudo[99905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:51 compute-0 sudo[99905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:51 compute-0 sudo[99905]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:51 compute-0 sudo[99930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:22:51 compute-0 sudo[99930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:51 compute-0 sudo[99930]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:51 compute-0 sudo[99955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:51 compute-0 sudo[99955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:51 compute-0 sudo[99955]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:51 compute-0 sudo[99980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:22:51 compute-0 sudo[99980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 06:22:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 29 06:22:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:51.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:51 compute-0 podman[100021]: 2025-11-29 06:22:51.614593127 +0000 UTC m=+0.027321233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:22:51 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 29 06:22:51 compute-0 podman[100021]: 2025-11-29 06:22:51.787092453 +0000 UTC m=+0.199820589 container create b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 29 06:22:51 compute-0 systemd[1]: Started libpod-conmon-b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1.scope.
Nov 29 06:22:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:22:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:22:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 29 06:22:52 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 06:22:52 compute-0 podman[100021]: 2025-11-29 06:22:52.13466438 +0000 UTC m=+0.547392506 container init b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:22:52 compute-0 ceph-mon[74654]: pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 25 B/s, 1 objects/s recovering
Nov 29 06:22:52 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 06:22:52 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:52 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:52 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 06:22:52 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:22:52 compute-0 podman[100021]: 2025-11-29 06:22:52.143530531 +0000 UTC m=+0.556258637 container start b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khayyam, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 06:22:52 compute-0 quirky_khayyam[100037]: 167 167
Nov 29 06:22:52 compute-0 systemd[1]: libpod-b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1.scope: Deactivated successfully.
Nov 29 06:22:52 compute-0 sshd-session[99903]: Invalid user user from 104.208.108.166 port 55426
Nov 29 06:22:52 compute-0 sshd-session[99903]: Received disconnect from 104.208.108.166 port 55426:11: Bye Bye [preauth]
Nov 29 06:22:52 compute-0 sshd-session[99903]: Disconnected from invalid user user 104.208.108.166 port 55426 [preauth]
Nov 29 06:22:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 29 06:22:52 compute-0 podman[100021]: 2025-11-29 06:22:52.581950546 +0000 UTC m=+0.994678662 container attach b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 06:22:52 compute-0 podman[100021]: 2025-11-29 06:22:52.58311545 +0000 UTC m=+0.995843566 container died b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khayyam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:22:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:52.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:52 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 06:22:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 29 06:22:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 29 06:22:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a910b4efb09f0310fb20a07eacdcf0d75d5baf45b2c55b3877e5012c8d19fa84-merged.mount: Deactivated successfully.
Nov 29 06:22:53 compute-0 podman[100021]: 2025-11-29 06:22:53.261073559 +0000 UTC m=+1.673801675 container remove b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 06:22:53 compute-0 systemd[1]: libpod-conmon-b5ce1bc3107a6e338ce1ee009b83570a999a648f2b36a55f0b1f746fb1e876f1.scope: Deactivated successfully.
Nov 29 06:22:53 compute-0 ceph-mgr[74948]: [progress INFO root] Writing back 23 completed events
Nov 29 06:22:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 06:22:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:53.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:53 compute-0 sudo[99980]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:22:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 29 06:22:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:22:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 29 06:22:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 06:22:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:22:54
Nov 29 06:22:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:22:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:22:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['backups', '.rgw.root', 'vms', 'volumes', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 29 06:22:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:22:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:22:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:22:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:22:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:22:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:22:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:22:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:54.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:55 compute-0 ceph-mon[74654]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 06:22:55 compute-0 ceph-mon[74654]: 10.11 scrub starts
Nov 29 06:22:55 compute-0 ceph-mon[74654]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 06:22:55 compute-0 ceph-mon[74654]: 10.11 scrub ok
Nov 29 06:22:55 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 06:22:55 compute-0 ceph-mon[74654]: osdmap e89: 3 total, 3 up, 3 in
Nov 29 06:22:55 compute-0 ceph-mon[74654]: pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:22:55 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 06:22:55 compute-0 ceph-mon[74654]: 8.b scrub starts
Nov 29 06:22:55 compute-0 ceph-mon[74654]: 8.b scrub ok
Nov 29 06:22:55 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 06:22:55 compute-0 ceph-mon[74654]: osdmap e90: 3 total, 3 up, 3 in
Nov 29 06:22:55 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 29 06:22:55 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 29 06:22:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:55.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:22:56 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:56 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 29 06:22:56 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 29 06:22:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 29 06:22:56 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 06:22:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:56.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 29 06:22:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:57 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 29 06:22:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:22:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:22:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:57.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:22:57 compute-0 ceph-mon[74654]: 11.16 scrub starts
Nov 29 06:22:57 compute-0 ceph-mon[74654]: 11.16 scrub ok
Nov 29 06:22:57 compute-0 ceph-mon[74654]: pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:22:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 06:22:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:22:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 29 06:22:58 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:22:58 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Nov 29 06:22:58 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Nov 29 06:22:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 29 06:22:58 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 06:22:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:22:58 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:22:58 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Nov 29 06:22:58 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Nov 29 06:22:58 compute-0 sudo[100080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:58 compute-0 sudo[100080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:58 compute-0 sudo[100080]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:58 compute-0 sudo[100105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:22:58 compute-0 sudo[100105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:58 compute-0 sudo[100105]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:58 compute-0 sudo[100130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:58 compute-0 sudo[100130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:58 compute-0 sudo[100130]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:58 compute-0 sudo[100155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047
Nov 29 06:22:58 compute-0 sudo[100155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:58 compute-0 sudo[100161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:58 compute-0 sudo[100161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:58 compute-0 sudo[100161]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:22:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:22:58.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:22:58 compute-0 sudo[100205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:22:58 compute-0 sudo[100205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:22:58 compute-0 sudo[100205]: pam_unix(sudo:session): session closed for user root
Nov 29 06:22:59 compute-0 podman[100245]: 2025-11-29 06:22:59.033589639 +0000 UTC m=+0.032225707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:22:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:22:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 06:22:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:22:59.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 06:23:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:00.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:01 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.1f deep-scrub starts
Nov 29 06:23:01 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 06:23:01 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 06:23:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 29 06:23:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:01.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:01 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 3.1f deep-scrub ok
Nov 29 06:23:01 compute-0 podman[100245]: 2025-11-29 06:23:01.859081135 +0000 UTC m=+2.857717183 container create 9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 06:23:01 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 29 06:23:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 06:23:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:02.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 06:23:03 compute-0 systemd[1]: Started libpod-conmon-9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb.scope.
Nov 29 06:23:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 29 06:23:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:23:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 06:23:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:03.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 06:23:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 2 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Nov 29 06:23:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 06:23:04 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 06:23:04 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 29 06:23:04 compute-0 ceph-mon[74654]: 3.3 scrub starts
Nov 29 06:23:04 compute-0 ceph-mon[74654]: 3.3 scrub ok
Nov 29 06:23:04 compute-0 ceph-mon[74654]: 10.3 scrub starts
Nov 29 06:23:04 compute-0 ceph-mon[74654]: 5.a scrub starts
Nov 29 06:23:04 compute-0 ceph-mon[74654]: 5.a scrub ok
Nov 29 06:23:04 compute-0 ceph-mon[74654]: 10.3 scrub ok
Nov 29 06:23:04 compute-0 ceph-mon[74654]: pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:04 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:04 compute-0 ceph-mon[74654]: 10.10 scrub starts
Nov 29 06:23:04 compute-0 ceph-mon[74654]: 10.10 scrub ok
Nov 29 06:23:04 compute-0 ceph-mon[74654]: 3.b scrub starts
Nov 29 06:23:04 compute-0 ceph-mon[74654]: 3.b scrub ok
Nov 29 06:23:04 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 06:23:04 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:04 compute-0 ceph-mon[74654]: osdmap e91: 3 total, 3 up, 3 in
Nov 29 06:23:04 compute-0 ceph-mon[74654]: pgmap v268: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:04 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:04 compute-0 ceph-mon[74654]: Reconfiguring osd.1 (monmap changed)...
Nov 29 06:23:04 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 06:23:04 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:23:04 compute-0 ceph-mon[74654]: Reconfiguring daemon osd.1 on compute-0
Nov 29 06:23:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:04.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:04 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 29 06:23:05 compute-0 podman[100245]: 2025-11-29 06:23:05.08913605 +0000 UTC m=+6.087772128 container init 9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 06:23:05 compute-0 podman[100245]: 2025-11-29 06:23:05.100618198 +0000 UTC m=+6.099254286 container start 9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 06:23:05 compute-0 elastic_lamarr[100264]: 167 167
Nov 29 06:23:05 compute-0 systemd[1]: libpod-9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb.scope: Deactivated successfully.
Nov 29 06:23:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:05.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 2 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 119 B/s wr, 20 op/s; 76 B/s, 2 objects/s recovering
Nov 29 06:23:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 06:23:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 06:23:06 compute-0 podman[100245]: 2025-11-29 06:23:06.24891551 +0000 UTC m=+7.247551588 container attach 9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:23:06 compute-0 podman[100245]: 2025-11-29 06:23:06.249789785 +0000 UTC m=+7.248425833 container died 9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 06:23:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 29 06:23:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:23:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:06.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:23:07 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 29 06:23:07 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.7 deep-scrub starts
Nov 29 06:23:07 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.7 deep-scrub ok
Nov 29 06:23:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-569ce47c36525046265bdd1c2731c732dd0a48cce7ed60c4ab8bb425936298d4-merged.mount: Deactivated successfully.
Nov 29 06:23:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:07.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 29 06:23:07 compute-0 ceph-mon[74654]: 10.1 scrub starts
Nov 29 06:23:07 compute-0 ceph-mon[74654]: 10.1 scrub ok
Nov 29 06:23:07 compute-0 ceph-mon[74654]: pgmap v269: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:07 compute-0 ceph-mon[74654]: 5.9 scrub starts
Nov 29 06:23:07 compute-0 ceph-mon[74654]: 5.9 scrub ok
Nov 29 06:23:07 compute-0 ceph-mon[74654]: 3.1f deep-scrub starts
Nov 29 06:23:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 06:23:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 06:23:07 compute-0 ceph-mon[74654]: 3.1f deep-scrub ok
Nov 29 06:23:07 compute-0 ceph-mon[74654]: osdmap e92: 3 total, 3 up, 3 in
Nov 29 06:23:07 compute-0 ceph-mon[74654]: pgmap v271: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:07 compute-0 ceph-mon[74654]: 8.15 deep-scrub starts
Nov 29 06:23:07 compute-0 ceph-mon[74654]: 8.15 deep-scrub ok
Nov 29 06:23:07 compute-0 ceph-mon[74654]: 3.f scrub starts
Nov 29 06:23:07 compute-0 ceph-mon[74654]: 3.f scrub ok
Nov 29 06:23:07 compute-0 ceph-mon[74654]: 8.5 scrub starts
Nov 29 06:23:07 compute-0 ceph-mon[74654]: 8.5 scrub ok
Nov 29 06:23:07 compute-0 ceph-mon[74654]: pgmap v272: 305 pgs: 2 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Nov 29 06:23:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 06:23:07 compute-0 ceph-mon[74654]: 8.1 scrub starts
Nov 29 06:23:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 2 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 127 B/s wr, 22 op/s; 82 B/s, 3 objects/s recovering
Nov 29 06:23:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:08.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 06:23:09 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 06:23:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 06:23:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:09.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 06:23:09 compute-0 podman[100245]: 2025-11-29 06:23:09.84247574 +0000 UTC m=+10.841111798 container remove 9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:23:09 compute-0 systemd[1]: libpod-conmon-9ea8cda7476f620017cb4d19e175a3bf63ad1bdff28063d9412adad19e3a3cdb.scope: Deactivated successfully.
Nov 29 06:23:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 2 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 21 op/s; 80 B/s, 3 objects/s recovering
Nov 29 06:23:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 06:23:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 06:23:10 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 29 06:23:10 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 29 06:23:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:10.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:11 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 29 06:23:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:11.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 9.6 KiB/s rd, 0 B/s wr, 17 op/s; 32 B/s, 1 objects/s recovering
Nov 29 06:23:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 06:23:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 06:23:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 29 06:23:12 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 29 06:23:12 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 29 06:23:12 compute-0 sudo[100155]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.724886004094547e-06 of space, bias 1.0, pg target 0.002017465801228364 quantized to 32 (current 32)
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:23:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:23:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:12.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 29 06:23:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:13.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:14 compute-0 ceph-mon[74654]: 8.1 scrub ok
Nov 29 06:23:14 compute-0 ceph-mon[74654]: 5.16 deep-scrub starts
Nov 29 06:23:14 compute-0 ceph-mon[74654]: 5.16 deep-scrub ok
Nov 29 06:23:14 compute-0 ceph-mon[74654]: pgmap v273: 305 pgs: 2 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 119 B/s wr, 20 op/s; 76 B/s, 2 objects/s recovering
Nov 29 06:23:14 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 06:23:14 compute-0 ceph-mon[74654]: osdmap e93: 3 total, 3 up, 3 in
Nov 29 06:23:14 compute-0 ceph-mon[74654]: 8.2 scrub starts
Nov 29 06:23:14 compute-0 ceph-mon[74654]: 8.2 scrub ok
Nov 29 06:23:14 compute-0 ceph-mon[74654]: 8.7 deep-scrub starts
Nov 29 06:23:14 compute-0 ceph-mon[74654]: 8.7 deep-scrub ok
Nov 29 06:23:14 compute-0 ceph-mon[74654]: pgmap v275: 305 pgs: 2 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 127 B/s wr, 22 op/s; 82 B/s, 3 objects/s recovering
Nov 29 06:23:14 compute-0 ceph-mon[74654]: 3.10 scrub starts
Nov 29 06:23:14 compute-0 ceph-mon[74654]: 8.16 scrub starts
Nov 29 06:23:14 compute-0 ceph-mon[74654]: 3.10 scrub ok
Nov 29 06:23:14 compute-0 ceph-mon[74654]: 8.16 scrub ok
Nov 29 06:23:14 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 06:23:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:14 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1a deep-scrub starts
Nov 29 06:23:14 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1a deep-scrub ok
Nov 29 06:23:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:14.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 06:23:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:15.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 06:23:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:16 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 06:23:16 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 06:23:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 29 06:23:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 06:23:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:16.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 06:23:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:17 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 29 06:23:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:23:17 compute-0 ceph-mon[74654]: pgmap v276: 305 pgs: 2 active+remapped, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 21 op/s; 80 B/s, 3 objects/s recovering
Nov 29 06:23:17 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 06:23:17 compute-0 ceph-mon[74654]: 10.6 scrub starts
Nov 29 06:23:17 compute-0 ceph-mon[74654]: 8.e scrub starts
Nov 29 06:23:17 compute-0 ceph-mon[74654]: 8.e scrub ok
Nov 29 06:23:17 compute-0 ceph-mon[74654]: 10.6 scrub ok
Nov 29 06:23:17 compute-0 ceph-mon[74654]: 8.13 scrub starts
Nov 29 06:23:17 compute-0 ceph-mon[74654]: pgmap v277: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 9.6 KiB/s rd, 0 B/s wr, 17 op/s; 32 B/s, 1 objects/s recovering
Nov 29 06:23:17 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 06:23:17 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 06:23:17 compute-0 ceph-mon[74654]: 8.13 scrub ok
Nov 29 06:23:17 compute-0 ceph-mon[74654]: osdmap e94: 3 total, 3 up, 3 in
Nov 29 06:23:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:23:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:17.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:23:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 29 06:23:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:18 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 06:23:18 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 06:23:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 06:23:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 06:23:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:23:18 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:23:18 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 06:23:18 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 06:23:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 29 06:23:18 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 29 06:23:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:18.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:18 compute-0 sudo[100298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:18 compute-0 sudo[100298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:18 compute-0 sudo[100298]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:19 compute-0 sudo[100323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:19 compute-0 sudo[100323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:19 compute-0 sudo[100323]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:19 compute-0 ceph-mon[74654]: 8.9 deep-scrub starts
Nov 29 06:23:19 compute-0 ceph-mon[74654]: 8.9 deep-scrub ok
Nov 29 06:23:19 compute-0 ceph-mon[74654]: pgmap v279: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:19 compute-0 ceph-mon[74654]: 8.1a deep-scrub starts
Nov 29 06:23:19 compute-0 ceph-mon[74654]: 8.1a deep-scrub ok
Nov 29 06:23:19 compute-0 ceph-mon[74654]: 11.a scrub starts
Nov 29 06:23:19 compute-0 ceph-mon[74654]: 11.a scrub ok
Nov 29 06:23:19 compute-0 ceph-mon[74654]: 8.a scrub starts
Nov 29 06:23:19 compute-0 ceph-mon[74654]: pgmap v280: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:19 compute-0 ceph-mon[74654]: 8.a scrub ok
Nov 29 06:23:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 06:23:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 06:23:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:19 compute-0 ceph-mon[74654]: osdmap e95: 3 total, 3 up, 3 in
Nov 29 06:23:19 compute-0 ceph-mon[74654]: 10.7 scrub starts
Nov 29 06:23:19 compute-0 ceph-mon[74654]: 10.7 scrub ok
Nov 29 06:23:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 06:23:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:23:19 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:23:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 06:23:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:19.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 06:23:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 29 06:23:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 2 B/s, 0 objects/s recovering
Nov 29 06:23:20 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:20 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Nov 29 06:23:20 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Nov 29 06:23:20 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 29 06:23:20 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 29 06:23:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:20.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 29 06:23:20 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 06:23:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:23:20 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:23:20 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Nov 29 06:23:20 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Nov 29 06:23:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 29 06:23:21 compute-0 ceph-mon[74654]: pgmap v282: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:21 compute-0 ceph-mon[74654]: 8.f scrub starts
Nov 29 06:23:21 compute-0 ceph-mon[74654]: 8.f scrub ok
Nov 29 06:23:21 compute-0 ceph-mon[74654]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 06:23:21 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:23:21 compute-0 ceph-mon[74654]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 06:23:21 compute-0 ceph-mon[74654]: osdmap e96: 3 total, 3 up, 3 in
Nov 29 06:23:21 compute-0 ceph-mon[74654]: 8.d scrub starts
Nov 29 06:23:21 compute-0 ceph-mon[74654]: 8.d scrub ok
Nov 29 06:23:21 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:21 compute-0 sudo[98010]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:21.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:21 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 29 06:23:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:23:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 2 B/s, 0 objects/s recovering
Nov 29 06:23:22 compute-0 sudo[100498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycapblehmiperbgoepmbhisbssmkdphw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397401.8430727-374-186992140637724/AnsiballZ_command.py'
Nov 29 06:23:22 compute-0 sudo[100498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:22 compute-0 python3.9[100500]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:23:22 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 29 06:23:22 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 29 06:23:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:23:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:22.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:23 compute-0 sudo[100498]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:23.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:23 compute-0 ceph-mon[74654]: 10.9 scrub starts
Nov 29 06:23:23 compute-0 ceph-mon[74654]: 10.9 scrub ok
Nov 29 06:23:23 compute-0 ceph-mon[74654]: pgmap v284: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 2 B/s, 0 objects/s recovering
Nov 29 06:23:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:23 compute-0 ceph-mon[74654]: Reconfiguring osd.0 (monmap changed)...
Nov 29 06:23:23 compute-0 ceph-mon[74654]: 8.1d scrub starts
Nov 29 06:23:23 compute-0 ceph-mon[74654]: 8.1d scrub ok
Nov 29 06:23:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 06:23:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:23:23 compute-0 ceph-mon[74654]: Reconfiguring daemon osd.0 on compute-1
Nov 29 06:23:23 compute-0 ceph-mon[74654]: osdmap e97: 3 total, 3 up, 3 in
Nov 29 06:23:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:23:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 1 active+clean+scrubbing, 2 activating+remapped, 302 active+clean; 456 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 296 B/s wr, 25 op/s; 12/214 objects misplaced (5.607%); 18 B/s, 1 objects/s recovering
Nov 29 06:23:24 compute-0 sshd-session[100661]: Invalid user mysql from 138.124.186.225 port 53646
Nov 29 06:23:24 compute-0 sshd-session[100661]: Received disconnect from 138.124.186.225 port 53646:11: Bye Bye [preauth]
Nov 29 06:23:24 compute-0 sshd-session[100661]: Disconnected from invalid user mysql 138.124.186.225 port 53646 [preauth]
Nov 29 06:23:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:23:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:23:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:23:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:23:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:23:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:23:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:24 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 06:23:24 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 06:23:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 06:23:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 06:23:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 06:23:24 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 06:23:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:23:24 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:23:24 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 06:23:24 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 06:23:24 compute-0 sudo[100790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piowdmdqoxhvjgekljvcqhseynkonrnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397403.7875595-398-261855932790996/AnsiballZ_selinux.py'
Nov 29 06:23:24 compute-0 sudo[100790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:24 compute-0 python3.9[100792]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 06:23:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:24 compute-0 sudo[100790]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:24.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 29 06:23:25 compute-0 sshd-session[100738]: Received disconnect from 31.6.212.12 port 36642:11: Bye Bye [preauth]
Nov 29 06:23:25 compute-0 sshd-session[100738]: Disconnected from authenticating user root 31.6.212.12 port 36642 [preauth]
Nov 29 06:23:25 compute-0 ceph-mon[74654]: pgmap v286: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 2 B/s, 0 objects/s recovering
Nov 29 06:23:25 compute-0 ceph-mon[74654]: 8.1e scrub starts
Nov 29 06:23:25 compute-0 ceph-mon[74654]: 8.1e scrub ok
Nov 29 06:23:25 compute-0 ceph-mon[74654]: 11.19 scrub starts
Nov 29 06:23:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:25 compute-0 ceph-mon[74654]: pgmap v287: 305 pgs: 1 active+clean+scrubbing, 2 activating+remapped, 302 active+clean; 456 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 296 B/s wr, 25 op/s; 12/214 objects misplaced (5.607%); 18 B/s, 1 objects/s recovering
Nov 29 06:23:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:25 compute-0 ceph-mon[74654]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 06:23:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 06:23:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 06:23:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:23:25 compute-0 ceph-mon[74654]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 06:23:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:25.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:25 compute-0 sudo[100943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owvhjlglqnuwchxtkklkewhpuurcvdko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397405.4949303-431-41910779164387/AnsiballZ_command.py'
Nov 29 06:23:25 compute-0 sudo[100943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:25 compute-0 python3.9[100945]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 06:23:25 compute-0 sudo[100943]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 1 active+clean+scrubbing, 2 activating+remapped, 302 active+clean; 456 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 255 B/s wr, 22 op/s; 12/214 objects misplaced (5.607%); 15 B/s, 1 objects/s recovering
Nov 29 06:23:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:23:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 29 06:23:26 compute-0 sudo[101095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqmmyrsqmvnptbzqbwyoyvkwvfliydpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397406.31042-455-9877414611239/AnsiballZ_file.py'
Nov 29 06:23:26 compute-0 sudo[101095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:26.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:26 compute-0 python3.9[101097]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:23:26 compute-0 sudo[101095]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:26 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 29 06:23:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:23:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 29 06:23:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:27.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:27 compute-0 sudo[101248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lapekturfdebwzkabmjmudnyqglylxiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397407.2332819-479-171453534044405/AnsiballZ_mount.py'
Nov 29 06:23:27 compute-0 sudo[101248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 1 active+clean+scrubbing, 2 activating+remapped, 302 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 255 B/s wr, 22 op/s; 12/214 objects misplaced (5.607%); 27 B/s, 1 objects/s recovering
Nov 29 06:23:28 compute-0 python3.9[101250]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 06:23:28 compute-0 sudo[101248]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:28 compute-0 ceph-mon[74654]: 11.19 scrub ok
Nov 29 06:23:28 compute-0 ceph-mon[74654]: 11.e scrub starts
Nov 29 06:23:28 compute-0 ceph-mon[74654]: 11.e scrub ok
Nov 29 06:23:28 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:23:28 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 29 06:23:28 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 29 06:23:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:28.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:23:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:23:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:23:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:23:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:23:29 compute-0 sudo[101401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgwsfhrfqiuzwvdohfbgdegwynmvhxen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397409.1829958-563-242523083676592/AnsiballZ_file.py'
Nov 29 06:23:29 compute-0 sudo[101401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:23:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:23:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:23:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:23:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:23:29 compute-0 python3.9[101403]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:23:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:23:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:29.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:23:29 compute-0 sudo[101401]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 29 06:23:30 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 29 06:23:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 2 activating+remapped, 303 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 255 B/s wr, 22 op/s; 12/214 objects misplaced (5.607%); 27 B/s, 1 objects/s recovering
Nov 29 06:23:30 compute-0 sudo[101553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuhadgpekviscrzclpjstmbermhohfvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397410.0018327-587-158999121433402/AnsiballZ_stat.py'
Nov 29 06:23:30 compute-0 sudo[101553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:30 compute-0 python3.9[101555]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:23:30 compute-0 ceph-mon[74654]: 10.a scrub starts
Nov 29 06:23:30 compute-0 ceph-mon[74654]: 10.a scrub ok
Nov 29 06:23:30 compute-0 ceph-mon[74654]: pgmap v288: 305 pgs: 1 active+clean+scrubbing, 2 activating+remapped, 302 active+clean; 456 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 255 B/s wr, 22 op/s; 12/214 objects misplaced (5.607%); 15 B/s, 1 objects/s recovering
Nov 29 06:23:30 compute-0 ceph-mon[74654]: osdmap e98: 3 total, 3 up, 3 in
Nov 29 06:23:30 compute-0 ceph-mon[74654]: pgmap v290: 305 pgs: 1 active+clean+scrubbing, 2 activating+remapped, 302 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 255 B/s wr, 22 op/s; 12/214 objects misplaced (5.607%); 27 B/s, 1 objects/s recovering
Nov 29 06:23:30 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:30 compute-0 ceph-mon[74654]: 9.1 scrub starts
Nov 29 06:23:30 compute-0 ceph-mon[74654]: 9.1 scrub ok
Nov 29 06:23:30 compute-0 sudo[101553]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:30 compute-0 sudo[101631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zobwjxvnwgxqiabdohtjgjwlwnfyhmfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397410.0018327-587-158999121433402/AnsiballZ_file.py'
Nov 29 06:23:30 compute-0 sudo[101631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:23:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:30.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:23:31 compute-0 python3.9[101634]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:23:31 compute-0 sudo[101631]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:23:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:31.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:23:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 06:23:32 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 29 06:23:32 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 06:23:32 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 06:23:32 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 06:23:32 compute-0 sudo[101784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zttlfxlrixdqhbspsjcimevyqmwppkhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397411.9999702-650-175325142739594/AnsiballZ_stat.py'
Nov 29 06:23:32 compute-0 sudo[101784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:32 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 29 06:23:32 compute-0 python3.9[101786]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:23:32 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 29 06:23:32 compute-0 sudo[101784]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 29 06:23:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 06:23:32 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 06:23:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 06:23:32 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 06:23:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:23:32 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:23:32 compute-0 ceph-mgr[74948]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 06:23:32 compute-0 ceph-mgr[74948]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 06:23:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 06:23:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:32.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 06:23:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:23:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:33.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:23:33 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:23:33 compute-0 sudo[101939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sklqemqnpllwhwwmmqshmawdohjevwht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397413.274363-689-64993726198437/AnsiballZ_getent.py'
Nov 29 06:23:33 compute-0 sudo[101939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:34 compute-0 python3.9[101941]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 06:23:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 06:23:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 29 06:23:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 06:23:34 compute-0 sudo[101939]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:34.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:35 compute-0 sudo[102093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgwumxmattkoplyuharkvhsimityxpyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397414.7821624-719-260325706254559/AnsiballZ_getent.py'
Nov 29 06:23:35 compute-0 sudo[102093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:35 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 29 06:23:35 compute-0 python3.9[102095]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 06:23:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:23:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:35.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:23:35 compute-0 sudo[102093]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 24 B/s, 1 objects/s recovering
Nov 29 06:23:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 29 06:23:36 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 06:23:36 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 29 06:23:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:23:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:36.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:23:37 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 29 06:23:37 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 29 06:23:37 compute-0 sudo[102247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heyibgstdwkplcbgpmejhrujufaqhkqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397416.0216203-743-225766557499212/AnsiballZ_group.py'
Nov 29 06:23:37 compute-0 sudo[102247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:37 compute-0 python3.9[102249]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 06:23:37 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 06:23:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 29 06:23:37 compute-0 sudo[102247]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:37 compute-0 ceph-mon[74654]: 8.11 scrub starts
Nov 29 06:23:37 compute-0 ceph-mon[74654]: 8.11 scrub ok
Nov 29 06:23:37 compute-0 ceph-mon[74654]: osdmap e99: 3 total, 3 up, 3 in
Nov 29 06:23:37 compute-0 ceph-mon[74654]: 10.b scrub starts
Nov 29 06:23:37 compute-0 ceph-mon[74654]: pgmap v292: 305 pgs: 2 activating+remapped, 303 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 255 B/s wr, 22 op/s; 12/214 objects misplaced (5.607%); 27 B/s, 1 objects/s recovering
Nov 29 06:23:37 compute-0 ceph-mon[74654]: 10.b scrub ok
Nov 29 06:23:37 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:37 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 06:23:37 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 29 06:23:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:23:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:37.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:37 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:23:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 06:23:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 29 06:23:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 06:23:38 compute-0 sudo[102349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:38 compute-0 sudo[102349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:38 compute-0 sudo[102349]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:38 compute-0 sudo[102374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:23:38 compute-0 sudo[102374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:38 compute-0 sudo[102374]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:38 compute-0 sudo[102423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:38 compute-0 sudo[102423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:38 compute-0 sudo[102423]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:38 compute-0 sudo[102476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzpsrjpksvufhvzxmbiykjxbapjqebyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397417.9244032-770-233499648618469/AnsiballZ_file.py'
Nov 29 06:23:38 compute-0 sudo[102476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:38 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 100 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=100 pruub=8.477513313s) [0] r=-1 lpr=100 pi=[58,100)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 311.193450928s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:23:38 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 100 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=100 pruub=8.477451324s) [0] r=-1 lpr=100 pi=[58,100)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 311.193450928s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:23:38 compute-0 sudo[102474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 06:23:38 compute-0 sudo[102474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 29 06:23:38 compute-0 ceph-mon[74654]: pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 06:23:38 compute-0 ceph-mon[74654]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 9.2 scrub starts
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 9.2 scrub ok
Nov 29 06:23:38 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 06:23:38 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 06:23:38 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:23:38 compute-0 ceph-mon[74654]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 8.3 scrub starts
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 10.c scrub starts
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 8.3 scrub ok
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 10.c scrub ok
Nov 29 06:23:38 compute-0 ceph-mon[74654]: pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 06:23:38 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 10.12 scrub starts
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 10.12 scrub ok
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 10.d scrub starts
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 10.d scrub ok
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 9.4 scrub starts
Nov 29 06:23:38 compute-0 ceph-mon[74654]: pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 143 MiB used, 21 GiB / 21 GiB avail; 24 B/s, 1 objects/s recovering
Nov 29 06:23:38 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 9.c scrub starts
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 9.c scrub ok
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 9.4 scrub ok
Nov 29 06:23:38 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 06:23:38 compute-0 ceph-mon[74654]: osdmap e100: 3 total, 3 up, 3 in
Nov 29 06:23:38 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:38 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:38 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 10.e scrub starts
Nov 29 06:23:38 compute-0 ceph-mon[74654]: 10.e scrub ok
Nov 29 06:23:38 compute-0 python3.9[102490]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 06:23:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 06:23:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 06:23:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 06:23:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 29 06:23:38 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 29 06:23:38 compute-0 sudo[102476]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:38 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 101 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=101 pruub=8.267313957s) [0] r=-1 lpr=101 pi=[58,101)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 311.193511963s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:23:38 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 101 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=101 pruub=8.267251968s) [0] r=-1 lpr=101 pi=[58,101)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 311.193511963s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:23:38 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 101 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=101) [0]/[1] r=0 lpr=101 pi=[58,101)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:23:38 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 101 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=101) [0]/[1] r=0 lpr=101 pi=[58,101)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:23:38 compute-0 podman[102597]: 2025-11-29 06:23:38.792478206 +0000 UTC m=+0.065997659 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 06:23:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 06:23:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:38.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 06:23:38 compute-0 podman[102597]: 2025-11-29 06:23:38.903295191 +0000 UTC m=+0.176814644 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 06:23:39 compute-0 sudo[102654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:39 compute-0 sudo[102654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:39 compute-0 sudo[102654]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:39 compute-0 sudo[102695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:39 compute-0 sudo[102695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:39 compute-0 sudo[102695]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 29 06:23:39 compute-0 podman[102828]: 2025-11-29 06:23:39.59110332 +0000 UTC m=+0.060506158 container exec f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:23:39 compute-0 podman[102828]: 2025-11-29 06:23:39.605169443 +0000 UTC m=+0.074572221 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:23:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:23:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:39.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:39 compute-0 podman[102945]: 2025-11-29 06:23:39.812514862 +0000 UTC m=+0.068386660 container exec c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, release=1793, version=2.2.4, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived)
Nov 29 06:23:39 compute-0 podman[102945]: 2025-11-29 06:23:39.853477145 +0000 UTC m=+0.109348943 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, version=2.2.4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vendor=Red Hat, Inc., release=1793, vcs-type=git, com.redhat.component=keepalived-container, architecture=x86_64, io.openshift.tags=Ceph keepalived)
Nov 29 06:23:39 compute-0 sudo[103029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubvuovccxsxpeoeytdpzfeexdrbvutwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397419.489746-803-255807606737801/AnsiballZ_dnf.py'
Nov 29 06:23:39 compute-0 sudo[103029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:39 compute-0 sudo[102474]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:23:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 29 06:23:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 06:23:40 compute-0 python3.9[103031]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:23:40 compute-0 sshd-session[102909]: Invalid user deployer from 79.116.35.29 port 48248
Nov 29 06:23:40 compute-0 sshd-session[102909]: Received disconnect from 79.116.35.29 port 48248:11: Bye Bye [preauth]
Nov 29 06:23:40 compute-0 sshd-session[102909]: Disconnected from invalid user deployer 79.116.35.29 port 48248 [preauth]
Nov 29 06:23:40 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 29 06:23:40 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 29 06:23:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:40.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:23:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:41.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:23:42 compute-0 ceph-mon[74654]: pgmap v297: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 06:23:42 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 06:23:42 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 06:23:42 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 06:23:42 compute-0 ceph-mon[74654]: osdmap e101: 3 total, 3 up, 3 in
Nov 29 06:23:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:23:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 29 06:23:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:42 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 29 06:23:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:23:42 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 102 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=102) [0]/[1] r=0 lpr=102 pi=[58,102)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:23:42 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 102 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=102) [0]/[1] r=0 lpr=102 pi=[58,102)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:23:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:23:42 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 102 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=101/102 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[58,101)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:23:42 compute-0 sudo[103029]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:23:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:42 compute-0 sudo[103044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:42 compute-0 sudo[103044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:42 compute-0 sudo[103044]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:42 compute-0 sudo[103083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:23:42 compute-0 sudo[103083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:42 compute-0 sudo[103083]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:42 compute-0 sudo[103108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:42 compute-0 sudo[103108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:42 compute-0 sudo[103108]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:42 compute-0 sudo[103133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:23:42 compute-0 sudo[103133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:23:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 29 06:23:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 06:23:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 29 06:23:42 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 29 06:23:42 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 103 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=4 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=103 pruub=11.911358833s) [0] r=-1 lpr=103 pi=[58,103)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 319.193572998s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:23:42 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 103 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=4 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=103 pruub=11.911271095s) [0] r=-1 lpr=103 pi=[58,103)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 319.193572998s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:23:42 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 103 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=101/102 n=6 ec=58/47 lis/c=101/58 les/c/f=102/59/0 sis=103 pruub=15.364741325s) [0] async=[0] r=-1 lpr=103 pi=[58,103)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 322.646942139s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:23:42 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 103 pg[9.10( v 56'1130 (0'0,56'1130] local-lis/les=101/102 n=6 ec=58/47 lis/c=101/58 les/c/f=102/59/0 sis=103 pruub=15.364190102s) [0] r=-1 lpr=103 pi=[58,103)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 322.646942139s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:23:42 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 103 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=102/103 n=6 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=102) [0]/[1] async=[0] r=0 lpr=102 pi=[58,102)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:23:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:42.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:42 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 29 06:23:42 compute-0 sudo[103133]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:42 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 29 06:23:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:23:43 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:23:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:23:43 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:23:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:23:43 compute-0 ceph-mon[74654]: pgmap v299: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:43 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 06:23:43 compute-0 ceph-mon[74654]: 10.1e scrub starts
Nov 29 06:23:43 compute-0 ceph-mon[74654]: 10.1e scrub ok
Nov 29 06:23:43 compute-0 ceph-mon[74654]: 9.12 scrub starts
Nov 29 06:23:43 compute-0 ceph-mon[74654]: 9.12 scrub ok
Nov 29 06:23:43 compute-0 ceph-mon[74654]: pgmap v300: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:43 compute-0 ceph-mon[74654]: 11.13 scrub starts
Nov 29 06:23:43 compute-0 ceph-mon[74654]: 11.13 scrub ok
Nov 29 06:23:43 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:43 compute-0 ceph-mon[74654]: osdmap e102: 3 total, 3 up, 3 in
Nov 29 06:23:43 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:43 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:43 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:43 compute-0 ceph-mon[74654]: 10.16 deep-scrub starts
Nov 29 06:23:43 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:43 compute-0 ceph-mon[74654]: 10.16 deep-scrub ok
Nov 29 06:23:43 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:43 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 06:23:43 compute-0 ceph-mon[74654]: osdmap e103: 3 total, 3 up, 3 in
Nov 29 06:23:43 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:43 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev f5ad9944-795b-4b49-8a18-ab9d102a0260 does not exist
Nov 29 06:23:43 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev c892b06b-5a02-440f-9c92-6d43c04c7a6b does not exist
Nov 29 06:23:43 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev a47d4639-2471-4d54-bd0f-cacc1daaa05d does not exist
Nov 29 06:23:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:23:43 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:23:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:23:43 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:23:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:23:43 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:23:43 compute-0 sudo[103213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:43 compute-0 sudo[103213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:43 compute-0 sudo[103213]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:43 compute-0 sudo[103267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:23:43 compute-0 sudo[103267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:43 compute-0 sudo[103267]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:43 compute-0 sudo[103304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:43 compute-0 sudo[103304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:43 compute-0 sudo[103304]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:43 compute-0 sudo[103340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:23:43 compute-0 sudo[103340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:43 compute-0 sudo[103415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdqblwaawattzjtbhfgymdqibhjghaxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397423.1224658-827-160582177298530/AnsiballZ_file.py'
Nov 29 06:23:43 compute-0 sudo[103415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:43 compute-0 python3.9[103417]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:23:43 compute-0 sudo[103415]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:43 compute-0 podman[103459]: 2025-11-29 06:23:43.688944259 +0000 UTC m=+0.040814629 container create 71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 06:23:43 compute-0 systemd[1]: Started libpod-conmon-71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e.scope.
Nov 29 06:23:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:43.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:23:43 compute-0 podman[103459]: 2025-11-29 06:23:43.670359694 +0000 UTC m=+0.022230084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:23:43 compute-0 podman[103459]: 2025-11-29 06:23:43.792239183 +0000 UTC m=+0.144109583 container init 71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:23:43 compute-0 podman[103459]: 2025-11-29 06:23:43.799148426 +0000 UTC m=+0.151018796 container start 71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:23:43 compute-0 podman[103459]: 2025-11-29 06:23:43.803085951 +0000 UTC m=+0.154956321 container attach 71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 06:23:43 compute-0 agitated_bell[103493]: 167 167
Nov 29 06:23:43 compute-0 systemd[1]: libpod-71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e.scope: Deactivated successfully.
Nov 29 06:23:43 compute-0 podman[103459]: 2025-11-29 06:23:43.806211773 +0000 UTC m=+0.158082163 container died 71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 06:23:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-293d0eec92dfd52a619978722a5c91cbb549b524c75db94693551e452e74e565-merged.mount: Deactivated successfully.
Nov 29 06:23:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 29 06:23:43 compute-0 podman[103459]: 2025-11-29 06:23:43.851682938 +0000 UTC m=+0.203553318 container remove 71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 06:23:43 compute-0 systemd[1]: libpod-conmon-71ec86b0fb0602e2e0571d1658145eddbec28f06201573e05ded0b4fe512c93e.scope: Deactivated successfully.
Nov 29 06:23:44 compute-0 podman[103524]: 2025-11-29 06:23:44.030499739 +0000 UTC m=+0.059886719 container create 1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 06:23:44 compute-0 systemd[1]: Started libpod-conmon-1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e.scope.
Nov 29 06:23:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:44 compute-0 podman[103524]: 2025-11-29 06:23:44.001206058 +0000 UTC m=+0.030593108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:23:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0e482fc23608cb6dcb5bc30d5fc8dac05ead32b639ed9b7eef210e46dab12c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0e482fc23608cb6dcb5bc30d5fc8dac05ead32b639ed9b7eef210e46dab12c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0e482fc23608cb6dcb5bc30d5fc8dac05ead32b639ed9b7eef210e46dab12c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0e482fc23608cb6dcb5bc30d5fc8dac05ead32b639ed9b7eef210e46dab12c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0e482fc23608cb6dcb5bc30d5fc8dac05ead32b639ed9b7eef210e46dab12c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:23:44 compute-0 podman[103524]: 2025-11-29 06:23:44.118152273 +0000 UTC m=+0.147539273 container init 1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 06:23:44 compute-0 podman[103524]: 2025-11-29 06:23:44.12554297 +0000 UTC m=+0.154929950 container start 1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:23:44 compute-0 podman[103524]: 2025-11-29 06:23:44.129060383 +0000 UTC m=+0.158447383 container attach 1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:23:44 compute-0 sudo[103671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwvenhwxtrcglbnxrooxrhekakyqzumi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397424.0443985-851-264560869888385/AnsiballZ_stat.py'
Nov 29 06:23:44 compute-0 sudo[103671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:44 compute-0 python3.9[103673]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:23:44 compute-0 sudo[103671]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:44 compute-0 sudo[103750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-murxtucmftjjtutbfywdjtxhkqmbxlxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397424.0443985-851-264560869888385/AnsiballZ_file.py'
Nov 29 06:23:44 compute-0 sudo[103750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:44.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:44 compute-0 unruffled_chandrasekhar[103564]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:23:44 compute-0 unruffled_chandrasekhar[103564]: --> relative data size: 1.0
Nov 29 06:23:44 compute-0 unruffled_chandrasekhar[103564]: --> All data devices are unavailable
Nov 29 06:23:45 compute-0 systemd[1]: libpod-1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e.scope: Deactivated successfully.
Nov 29 06:23:45 compute-0 podman[103524]: 2025-11-29 06:23:45.027529798 +0000 UTC m=+1.056916768 container died 1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:23:45 compute-0 python3.9[103755]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:23:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc0e482fc23608cb6dcb5bc30d5fc8dac05ead32b639ed9b7eef210e46dab12c-merged.mount: Deactivated successfully.
Nov 29 06:23:45 compute-0 sudo[103750]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:45 compute-0 podman[103524]: 2025-11-29 06:23:45.086510781 +0000 UTC m=+1.115897801 container remove 1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chandrasekhar, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:23:45 compute-0 systemd[1]: libpod-conmon-1477d0cd0cca66dc33cb20eab36f9c3c9fbce36bccf779d2c92313346038194e.scope: Deactivated successfully.
Nov 29 06:23:45 compute-0 sudo[103340]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:45 compute-0 sudo[103777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:45 compute-0 sudo[103777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:45 compute-0 sudo[103777]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:45 compute-0 sudo[103826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:23:45 compute-0 sudo[103826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:45 compute-0 sudo[103826]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:45 compute-0 sudo[103851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:45 compute-0 sudo[103851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:45 compute-0 sudo[103851]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:45 compute-0 sudo[103876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:23:45 compute-0 sudo[103876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:45 compute-0 podman[103940]: 2025-11-29 06:23:45.630011942 +0000 UTC m=+0.050375961 container create 8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_feynman, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 06:23:45 compute-0 systemd[1]: Started libpod-conmon-8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba.scope.
Nov 29 06:23:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:23:45 compute-0 podman[103940]: 2025-11-29 06:23:45.605572274 +0000 UTC m=+0.025936373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:23:45 compute-0 podman[103940]: 2025-11-29 06:23:45.716425129 +0000 UTC m=+0.136789228 container init 8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:23:45 compute-0 podman[103940]: 2025-11-29 06:23:45.723923939 +0000 UTC m=+0.144287958 container start 8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 06:23:45 compute-0 podman[103940]: 2025-11-29 06:23:45.727654379 +0000 UTC m=+0.148018418 container attach 8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_feynman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 29 06:23:45 compute-0 systemd[1]: libpod-8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba.scope: Deactivated successfully.
Nov 29 06:23:45 compute-0 jovial_feynman[103957]: 167 167
Nov 29 06:23:45 compute-0 conmon[103957]: conmon 8ebb4f9a189399445786 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba.scope/container/memory.events
Nov 29 06:23:45 compute-0 podman[103940]: 2025-11-29 06:23:45.731145581 +0000 UTC m=+0.151509630 container died 8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_feynman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 06:23:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b67e681f0ca82b488ec9acdd24020114212542b7baa7b3543d58d8619f3105ef-merged.mount: Deactivated successfully.
Nov 29 06:23:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:45.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:45 compute-0 podman[103940]: 2025-11-29 06:23:45.779735868 +0000 UTC m=+0.200099907 container remove 8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:23:45 compute-0 systemd[1]: libpod-conmon-8ebb4f9a189399445786068fd9a0579872c3e57963c9b09fffbaaf412bcf53ba.scope: Deactivated successfully.
Nov 29 06:23:45 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 29 06:23:45 compute-0 podman[104027]: 2025-11-29 06:23:45.984986246 +0000 UTC m=+0.040781209 container create fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 06:23:46 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 29 06:23:46 compute-0 systemd[1]: Started libpod-conmon-fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec.scope.
Nov 29 06:23:46 compute-0 podman[104027]: 2025-11-29 06:23:45.966561755 +0000 UTC m=+0.022356758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:23:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e43804128a90e7169ee269febe1ee3d6ca4c36e8a0bce3f4ee60d0807efe3725/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e43804128a90e7169ee269febe1ee3d6ca4c36e8a0bce3f4ee60d0807efe3725/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e43804128a90e7169ee269febe1ee3d6ca4c36e8a0bce3f4ee60d0807efe3725/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e43804128a90e7169ee269febe1ee3d6ca4c36e8a0bce3f4ee60d0807efe3725/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:23:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:46 compute-0 podman[104027]: 2025-11-29 06:23:46.092504223 +0000 UTC m=+0.148299276 container init fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:23:46 compute-0 podman[104027]: 2025-11-29 06:23:46.105546077 +0000 UTC m=+0.161341080 container start fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_boyd, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:23:46 compute-0 podman[104027]: 2025-11-29 06:23:46.110238094 +0000 UTC m=+0.166033107 container attach fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 06:23:46 compute-0 sudo[104127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arhptzfhhbiuzkcbojawrsyadldrddmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397425.8881197-890-224710392925381/AnsiballZ_stat.py'
Nov 29 06:23:46 compute-0 sudo[104127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:46 compute-0 python3.9[104129]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:23:46 compute-0 sudo[104127]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:46 compute-0 sudo[104207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnxiqsxvruoifhxyyybzvpmogamnygfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397425.8881197-890-224710392925381/AnsiballZ_file.py'
Nov 29 06:23:46 compute-0 sudo[104207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:46 compute-0 zen_boyd[104072]: {
Nov 29 06:23:46 compute-0 zen_boyd[104072]:     "1": [
Nov 29 06:23:46 compute-0 zen_boyd[104072]:         {
Nov 29 06:23:46 compute-0 zen_boyd[104072]:             "devices": [
Nov 29 06:23:46 compute-0 zen_boyd[104072]:                 "/dev/loop3"
Nov 29 06:23:46 compute-0 zen_boyd[104072]:             ],
Nov 29 06:23:46 compute-0 zen_boyd[104072]:             "lv_name": "ceph_lv0",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:             "lv_size": "7511998464",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:             "name": "ceph_lv0",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:             "tags": {
Nov 29 06:23:46 compute-0 zen_boyd[104072]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:                 "ceph.cluster_name": "ceph",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:                 "ceph.crush_device_class": "",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:                 "ceph.encrypted": "0",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:                 "ceph.osd_id": "1",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:                 "ceph.type": "block",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:                 "ceph.vdo": "0"
Nov 29 06:23:46 compute-0 zen_boyd[104072]:             },
Nov 29 06:23:46 compute-0 zen_boyd[104072]:             "type": "block",
Nov 29 06:23:46 compute-0 zen_boyd[104072]:             "vg_name": "ceph_vg0"
Nov 29 06:23:46 compute-0 zen_boyd[104072]:         }
Nov 29 06:23:46 compute-0 zen_boyd[104072]:     ]
Nov 29 06:23:46 compute-0 zen_boyd[104072]: }
Nov 29 06:23:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:23:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:46.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:23:46 compute-0 systemd[1]: libpod-fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec.scope: Deactivated successfully.
Nov 29 06:23:46 compute-0 podman[104027]: 2025-11-29 06:23:46.932710721 +0000 UTC m=+0.988505714 container died fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 06:23:47 compute-0 python3.9[104209]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:23:47 compute-0 sudo[104207]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e43804128a90e7169ee269febe1ee3d6ca4c36e8a0bce3f4ee60d0807efe3725-merged.mount: Deactivated successfully.
Nov 29 06:23:47 compute-0 podman[104027]: 2025-11-29 06:23:47.115256283 +0000 UTC m=+1.171051246 container remove fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 06:23:47 compute-0 systemd[1]: libpod-conmon-fef7d7570a47faedf2f06bb84d6aee73af55a01f297cad076b8c4f4be121dcec.scope: Deactivated successfully.
Nov 29 06:23:47 compute-0 sudo[103876]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:47 compute-0 sudo[104251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:47 compute-0 sudo[104251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:47 compute-0 sudo[104251]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:47 compute-0 sudo[104276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:23:47 compute-0 sudo[104276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:47 compute-0 sudo[104276]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:47 compute-0 sudo[104301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:47 compute-0 sudo[104301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:47 compute-0 sudo[104301]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 29 06:23:47 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 29 06:23:47 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 104 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=4 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=104) [0]/[1] r=0 lpr=104 pi=[58,104)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:23:47 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 104 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=102/103 n=6 ec=58/47 lis/c=102/58 les/c/f=103/59/0 sis=104 pruub=11.483637810s) [0] async=[0] r=-1 lpr=104 pi=[58,104)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 323.288909912s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:23:47 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 104 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=58/59 n=4 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=104) [0]/[1] r=0 lpr=104 pi=[58,104)/1 crt=56'1130 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 06:23:47 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 104 pg[9.11( v 56'1130 (0'0,56'1130] local-lis/les=102/103 n=6 ec=58/47 lis/c=102/58 les/c/f=103/59/0 sis=104 pruub=11.483239174s) [0] r=-1 lpr=104 pi=[58,104)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 323.288909912s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:23:47 compute-0 sudo[104326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:23:47 compute-0 sudo[104326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:47.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:23:47 compute-0 podman[104470]: 2025-11-29 06:23:47.85736643 +0000 UTC m=+0.058017627 container create e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_einstein, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 06:23:47 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 29 06:23:47 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 29 06:23:47 compute-0 systemd[1]: Started libpod-conmon-e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a.scope.
Nov 29 06:23:47 compute-0 podman[104470]: 2025-11-29 06:23:47.830353294 +0000 UTC m=+0.031004571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:23:47 compute-0 sudo[104531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmfiofvixjhnnrqojnfpzunvoivxtgkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397427.5590692-935-74962023355416/AnsiballZ_dnf.py'
Nov 29 06:23:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:23:47 compute-0 sudo[104531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:48 compute-0 podman[104470]: 2025-11-29 06:23:48.010872677 +0000 UTC m=+0.211523944 container init e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:23:48 compute-0 podman[104470]: 2025-11-29 06:23:48.022838741 +0000 UTC m=+0.223489978 container start e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_einstein, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:23:48 compute-0 tender_einstein[104532]: 167 167
Nov 29 06:23:48 compute-0 systemd[1]: libpod-e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a.scope: Deactivated successfully.
Nov 29 06:23:48 compute-0 podman[104470]: 2025-11-29 06:23:48.037015468 +0000 UTC m=+0.237666745 container attach e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 06:23:48 compute-0 podman[104470]: 2025-11-29 06:23:48.038374287 +0000 UTC m=+0.239025524 container died e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 06:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9003ef961812caaec732846723c274499f48864042770d704a520677fcd4bba3-merged.mount: Deactivated successfully.
Nov 29 06:23:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 06:23:48 compute-0 python3.9[104536]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:23:48 compute-0 ceph-mon[74654]: 9.14 scrub starts
Nov 29 06:23:48 compute-0 ceph-mon[74654]: 9.14 scrub ok
Nov 29 06:23:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:23:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:23:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:23:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:23:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:23:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:23:48 compute-0 ceph-mon[74654]: 10.17 deep-scrub starts
Nov 29 06:23:48 compute-0 ceph-mon[74654]: 10.17 deep-scrub ok
Nov 29 06:23:48 compute-0 podman[104470]: 2025-11-29 06:23:48.334125888 +0000 UTC m=+0.534777095 container remove e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:23:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 29 06:23:48 compute-0 systemd[1]: libpod-conmon-e0a2b7fb25562f7e770c3104e71e5f83bd32ce81c70e908bc814021a7a33449a.scope: Deactivated successfully.
Nov 29 06:23:48 compute-0 podman[104562]: 2025-11-29 06:23:48.493131614 +0000 UTC m=+0.037219920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:23:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:23:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:48.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:23:49 compute-0 podman[104562]: 2025-11-29 06:23:49.0885564 +0000 UTC m=+0.632644696 container create c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:23:49 compute-0 sshd-session[104537]: Invalid user radarr from 103.147.159.91 port 52842
Nov 29 06:23:49 compute-0 sshd-session[104537]: Received disconnect from 103.147.159.91 port 52842:11: Bye Bye [preauth]
Nov 29 06:23:49 compute-0 sshd-session[104537]: Disconnected from invalid user radarr 103.147.159.91 port 52842 [preauth]
Nov 29 06:23:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:49.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 29 06:23:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 06:23:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:50.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:51 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 29 06:23:51 compute-0 systemd[1]: Started libpod-conmon-c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19.scope.
Nov 29 06:23:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5b29e6cd452a931580e8020b6cbe1b57de5ce94ed949805ba37a611d74918d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5b29e6cd452a931580e8020b6cbe1b57de5ce94ed949805ba37a611d74918d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5b29e6cd452a931580e8020b6cbe1b57de5ce94ed949805ba37a611d74918d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5b29e6cd452a931580e8020b6cbe1b57de5ce94ed949805ba37a611d74918d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:23:51 compute-0 sudo[104531]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:23:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:51.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:23:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 1 activating+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/219 objects misplaced (1.826%); 13 B/s, 0 objects/s recovering
Nov 29 06:23:52 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 29 06:23:52 compute-0 python3.9[104732]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:23:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:52.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:53.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/219 objects misplaced (1.826%); 13 B/s, 0 objects/s recovering
Nov 29 06:23:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:23:54
Nov 29 06:23:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:23:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Nov 29 06:23:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:23:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:23:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:23:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:23:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:23:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:23:54 compute-0 python3.9[104885]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 06:23:54 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 29 06:23:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:54.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:23:55 compute-0 ceph-mon[74654]: pgmap v303: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:55 compute-0 ceph-mon[74654]: 10.1a scrub starts
Nov 29 06:23:55 compute-0 ceph-mon[74654]: 10.1a scrub ok
Nov 29 06:23:55 compute-0 ceph-mon[74654]: 9.1c scrub starts
Nov 29 06:23:55 compute-0 ceph-mon[74654]: 9.1c scrub ok
Nov 29 06:23:55 compute-0 ceph-mon[74654]: pgmap v304: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:23:55 compute-0 ceph-mon[74654]: osdmap e104: 3 total, 3 up, 3 in
Nov 29 06:23:55 compute-0 ceph-mon[74654]: 11.2 scrub starts
Nov 29 06:23:55 compute-0 ceph-mon[74654]: 11.2 scrub ok
Nov 29 06:23:55 compute-0 ceph-mon[74654]: pgmap v306: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 06:23:55 compute-0 ceph-mon[74654]: 10.1c scrub starts
Nov 29 06:23:55 compute-0 ceph-mon[74654]: 10.1c scrub ok
Nov 29 06:23:55 compute-0 podman[104562]: 2025-11-29 06:23:55.30964532 +0000 UTC m=+6.853733706 container init c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 06:23:55 compute-0 podman[104562]: 2025-11-29 06:23:55.320000047 +0000 UTC m=+6.864088343 container start c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 06:23:55 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 105 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=104/105 n=4 ec=58/47 lis/c=58/58 les/c/f=59/59/0 sis=104) [0]/[1] async=[0] r=0 lpr=104 pi=[58,104)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:23:55 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 29 06:23:55 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 29 06:23:55 compute-0 podman[104562]: 2025-11-29 06:23:55.387143895 +0000 UTC m=+6.931232191 container attach c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 06:23:55 compute-0 python3.9[105036]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:23:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:23:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:55.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:23:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/219 objects misplaced (1.826%)
Nov 29 06:23:56 compute-0 funny_greider[104580]: {
Nov 29 06:23:56 compute-0 funny_greider[104580]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:23:56 compute-0 funny_greider[104580]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:23:56 compute-0 funny_greider[104580]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:23:56 compute-0 funny_greider[104580]:         "osd_id": 1,
Nov 29 06:23:56 compute-0 funny_greider[104580]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:23:56 compute-0 funny_greider[104580]:         "type": "bluestore"
Nov 29 06:23:56 compute-0 funny_greider[104580]:     }
Nov 29 06:23:56 compute-0 funny_greider[104580]: }
Nov 29 06:23:56 compute-0 systemd[1]: libpod-c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19.scope: Deactivated successfully.
Nov 29 06:23:56 compute-0 podman[104562]: 2025-11-29 06:23:56.16119446 +0000 UTC m=+7.705282776 container died c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:23:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 29 06:23:56 compute-0 ceph-mon[74654]: 10.1d scrub starts
Nov 29 06:23:56 compute-0 ceph-mon[74654]: 10.1d scrub ok
Nov 29 06:23:56 compute-0 ceph-mon[74654]: pgmap v307: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 06:23:56 compute-0 ceph-mon[74654]: osdmap e105: 3 total, 3 up, 3 in
Nov 29 06:23:56 compute-0 ceph-mon[74654]: 10.1f scrub starts
Nov 29 06:23:56 compute-0 ceph-mon[74654]: 10.1f scrub ok
Nov 29 06:23:56 compute-0 ceph-mon[74654]: 8.1c scrub starts
Nov 29 06:23:56 compute-0 ceph-mon[74654]: 8.1c scrub ok
Nov 29 06:23:56 compute-0 ceph-mon[74654]: pgmap v309: 305 pgs: 1 activating+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/219 objects misplaced (1.826%); 13 B/s, 0 objects/s recovering
Nov 29 06:23:56 compute-0 ceph-mon[74654]: 11.6 scrub starts
Nov 29 06:23:56 compute-0 ceph-mon[74654]: 8.1f scrub starts
Nov 29 06:23:56 compute-0 ceph-mon[74654]: 8.1f scrub ok
Nov 29 06:23:56 compute-0 ceph-mon[74654]: pgmap v310: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/219 objects misplaced (1.826%); 13 B/s, 0 objects/s recovering
Nov 29 06:23:56 compute-0 ceph-mon[74654]: 11.9 scrub starts
Nov 29 06:23:56 compute-0 ceph-mon[74654]: 8.c scrub starts
Nov 29 06:23:56 compute-0 ceph-mon[74654]: 8.c scrub ok
Nov 29 06:23:56 compute-0 ceph-mon[74654]: 11.6 scrub ok
Nov 29 06:23:56 compute-0 ceph-mon[74654]: 11.9 scrub ok
Nov 29 06:23:56 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 29 06:23:56 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 29 06:23:56 compute-0 sudo[105216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trkhmqofdlqpcvaxybmllxgatmnthlpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397436.1279993-1058-164499443080997/AnsiballZ_systemd.py'
Nov 29 06:23:56 compute-0 sudo[105216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:23:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:23:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:56.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:23:57 compute-0 python3.9[105219]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:23:57 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 06:23:57 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 29 06:23:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:23:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:57.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:23:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 06:23:58 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 29 06:23:58 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 06:23:58 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 06:23:58 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 06:23:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:23:58.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:23:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc5b29e6cd452a931580e8020b6cbe1b57de5ce94ed949805ba37a611d74918d-merged.mount: Deactivated successfully.
Nov 29 06:23:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 06:23:59 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 06:23:59 compute-0 sudo[105232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:59 compute-0 sudo[105232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:59 compute-0 sudo[105232]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:59 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 06:23:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 29 06:23:59 compute-0 sudo[105257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:23:59 compute-0 sudo[105257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:23:59 compute-0 sudo[105257]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:59 compute-0 sudo[105216]: pam_unix(sudo:session): session closed for user root
Nov 29 06:23:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:23:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:23:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:23:59.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 06:24:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 29 06:24:00 compute-0 python3.9[105432]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 06:24:00 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 29 06:24:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:00.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:01 compute-0 anacron[30913]: Job `cron.daily' started
Nov 29 06:24:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:01.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:02 compute-0 anacron[30913]: Job `cron.daily' terminated
Nov 29 06:24:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 06:24:02 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 29 06:24:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:24:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 29 06:24:02 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 29 06:24:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:02.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 06:24:03 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 06:24:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 06:24:03 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 06:24:03 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 106 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=104/105 n=4 ec=58/47 lis/c=104/58 les/c/f=105/59/0 sis=106 pruub=8.048233032s) [0] async=[0] r=-1 lpr=106 pi=[58,106)/1 crt=56'1130 lcod 0'0 mlcod 0'0 active pruub 335.750701904s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:24:03 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 106 pg[9.12( v 56'1130 (0'0,56'1130] local-lis/les=104/105 n=4 ec=58/47 lis/c=104/58 les/c/f=105/59/0 sis=106 pruub=8.047649384s) [0] r=-1 lpr=106 pi=[58,106)/1 crt=56'1130 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 335.750701904s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 06:24:03 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 29 06:24:03 compute-0 sshd-session[105464]: Received disconnect from 162.214.92.14 port 44870:11: Bye Bye [preauth]
Nov 29 06:24:03 compute-0 sshd-session[105464]: Disconnected from authenticating user root 162.214.92.14 port 44870 [preauth]
Nov 29 06:24:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:03.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:03 compute-0 ceph-mon[74654]: 11.17 scrub starts
Nov 29 06:24:03 compute-0 ceph-mon[74654]: 11.17 scrub ok
Nov 29 06:24:03 compute-0 ceph-mon[74654]: pgmap v311: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/219 objects misplaced (1.826%)
Nov 29 06:24:03 compute-0 ceph-mon[74654]: 8.10 scrub starts
Nov 29 06:24:03 compute-0 ceph-mon[74654]: 8.10 scrub ok
Nov 29 06:24:03 compute-0 ceph-mon[74654]: 11.b scrub starts
Nov 29 06:24:03 compute-0 ceph-mon[74654]: 11.b scrub ok
Nov 29 06:24:03 compute-0 ceph-mon[74654]: 11.14 scrub starts
Nov 29 06:24:03 compute-0 ceph-mon[74654]: 11.14 scrub ok
Nov 29 06:24:03 compute-0 ceph-mon[74654]: 11.c scrub starts
Nov 29 06:24:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 1 active+clean+scrubbing, 1 active+remapped, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:04 compute-0 podman[104562]: 2025-11-29 06:24:04.557898887 +0000 UTC m=+16.101987183 container remove c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:24:04 compute-0 sudo[105591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dolzfwbxmrhzqnvfkbdgcxozgqzyutjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397444.216265-1229-243357930088234/AnsiballZ_systemd.py'
Nov 29 06:24:04 compute-0 sudo[105591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:04 compute-0 sudo[104326]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:04 compute-0 systemd[1]: libpod-conmon-c8e2c0de8d45b275dfe3a29a2e288620cc932118c2cdeba9e7622f57b6edaa19.scope: Deactivated successfully.
Nov 29 06:24:04 compute-0 python3.9[105593]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:24:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:04.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 06:24:04 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 06:24:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:24:05 compute-0 sudo[105591]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:05 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 06:24:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 29 06:24:05 compute-0 sudo[105748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppjqawvrlwpjgcwecvezvxfdvvlghdnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397445.1496887-1229-153577155375981/AnsiballZ_systemd.py'
Nov 29 06:24:05 compute-0 sudo[105748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:05 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 29 06:24:05 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 29 06:24:05 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 29 06:24:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:05.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:05 compute-0 python3.9[105750]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:24:05 compute-0 ceph-mon[74654]: pgmap v312: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 11.c scrub ok
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 9.17 deep-scrub starts
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 9.17 deep-scrub ok
Nov 29 06:24:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 8.17 scrub starts
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 8.17 scrub ok
Nov 29 06:24:05 compute-0 ceph-mon[74654]: pgmap v313: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 06:24:05 compute-0 ceph-mon[74654]: osdmap e106: 3 total, 3 up, 3 in
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 11.d scrub starts
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 9.1b scrub starts
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 9.1b scrub ok
Nov 29 06:24:05 compute-0 ceph-mon[74654]: pgmap v315: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 11.d scrub ok
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 11.10 scrub starts
Nov 29 06:24:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 06:24:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 9.7 deep-scrub starts
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 9.7 deep-scrub ok
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 11.10 scrub ok
Nov 29 06:24:05 compute-0 ceph-mon[74654]: pgmap v316: 305 pgs: 1 active+clean+scrubbing, 1 active+remapped, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 9.b scrub starts
Nov 29 06:24:05 compute-0 ceph-mon[74654]: 9.b scrub ok
Nov 29 06:24:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 06:24:05 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:24:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:24:06 compute-0 sshd-session[105594]: Received disconnect from 104.208.108.166 port 5500:11: Bye Bye [preauth]
Nov 29 06:24:06 compute-0 sshd-session[105594]: Disconnected from authenticating user root 104.208.108.166 port 5500 [preauth]
Nov 29 06:24:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 1 active+clean+scrubbing, 1 active+remapped, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 29 06:24:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 06:24:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 29 06:24:06 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.15 deep-scrub starts
Nov 29 06:24:06 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.15 deep-scrub ok
Nov 29 06:24:06 compute-0 sudo[105748]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:06.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:07 compute-0 sshd-session[95705]: Connection closed by 192.168.122.30 port 52984
Nov 29 06:24:07 compute-0 sshd-session[95702]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:24:07 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Nov 29 06:24:07 compute-0 systemd[1]: session-35.scope: Consumed 1min 11.587s CPU time.
Nov 29 06:24:07 compute-0 systemd-logind[797]: Session 35 logged out. Waiting for processes to exit.
Nov 29 06:24:07 compute-0 systemd-logind[797]: Removed session 35.
Nov 29 06:24:07 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:24:07 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 526ebd6b-5022-488f-94e1-22537738e9ee does not exist
Nov 29 06:24:07 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 67a0cce2-3301-4f33-bd35-bfbc77f648b8 does not exist
Nov 29 06:24:07 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 34152fcc-0354-4358-9e09-156df2d0b0fe does not exist
Nov 29 06:24:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:07.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:07 compute-0 sudo[105778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:24:07 compute-0 sudo[105778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:24:07 compute-0 sudo[105778]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:07 compute-0 sudo[105803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:24:07 compute-0 sudo[105803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:24:07 compute-0 sudo[105803]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 29 06:24:08 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 06:24:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:08.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:09.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 29 06:24:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 06:24:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 06:24:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 06:24:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 06:24:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 06:24:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 29 06:24:10 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 06:24:10 compute-0 ceph-mon[74654]: 11.1e scrub starts
Nov 29 06:24:10 compute-0 ceph-mon[74654]: 11.1e scrub ok
Nov 29 06:24:10 compute-0 ceph-mon[74654]: 11.11 scrub starts
Nov 29 06:24:10 compute-0 ceph-mon[74654]: 11.11 scrub ok
Nov 29 06:24:10 compute-0 ceph-mon[74654]: osdmap e107: 3 total, 3 up, 3 in
Nov 29 06:24:10 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:24:10 compute-0 ceph-mon[74654]: pgmap v318: 305 pgs: 1 active+clean+scrubbing, 1 active+remapped, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:10 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 06:24:10 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 29 06:24:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:10.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:10 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 29 06:24:11 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 29 06:24:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 29 06:24:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:11.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 29 06:24:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:24:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:24:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:24:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:12.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:24:13 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 06:24:13 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 06:24:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 29 06:24:13 compute-0 ceph-mon[74654]: 11.15 deep-scrub starts
Nov 29 06:24:13 compute-0 ceph-mon[74654]: 11.15 deep-scrub ok
Nov 29 06:24:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:24:13 compute-0 ceph-mon[74654]: pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 06:24:13 compute-0 ceph-mon[74654]: pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 06:24:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 06:24:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 06:24:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 06:24:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 06:24:13 compute-0 ceph-mon[74654]: 11.18 scrub starts
Nov 29 06:24:13 compute-0 ceph-mon[74654]: osdmap e108: 3 total, 3 up, 3 in
Nov 29 06:24:13 compute-0 ceph-mon[74654]: 11.7 deep-scrub starts
Nov 29 06:24:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:13.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:13 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 29 06:24:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 29 06:24:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 29 06:24:14 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 06:24:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:14.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:15 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.1f deep-scrub starts
Nov 29 06:24:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:15.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 29 06:24:16 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 06:24:16 compute-0 sshd-session[105832]: Accepted publickey for zuul from 192.168.122.30 port 45906 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:24:16 compute-0 systemd-logind[797]: New session 36 of user zuul.
Nov 29 06:24:16 compute-0 systemd[1]: Started Session 36 of User zuul.
Nov 29 06:24:16 compute-0 sshd-session[105832]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:24:16 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.8 deep-scrub starts
Nov 29 06:24:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:16.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:17 compute-0 python3.9[105987]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:24:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:17.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:18 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 11.1f deep-scrub ok
Nov 29 06:24:18 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.8 deep-scrub ok
Nov 29 06:24:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 2 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:18 compute-0 sudo[106144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjehiykobnwpqjmqzvxrewtigeqncsan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397458.3563178-73-160723285987829/AnsiballZ_getent.py'
Nov 29 06:24:18 compute-0 sudo[106144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:18.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:19 compute-0 python3.9[106147]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 06:24:19 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Nov 29 06:24:19 compute-0 sudo[106144]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:19 compute-0 sshd-session[105992]: Invalid user odoo15 from 115.190.37.201 port 60430
Nov 29 06:24:19 compute-0 sshd-session[105992]: Received disconnect from 115.190.37.201 port 60430:11: Bye Bye [preauth]
Nov 29 06:24:19 compute-0 sshd-session[105992]: Disconnected from invalid user odoo15 115.190.37.201 port 60430 [preauth]
Nov 29 06:24:19 compute-0 sudo[106173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:24:19 compute-0 sudo[106173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:24:19 compute-0 sudo[106173]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:19 compute-0 sudo[106198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:24:19 compute-0 sudo[106198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:24:19 compute-0 sudo[106198]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 29 06:24:19 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 06:24:19 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Nov 29 06:24:19 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 06:24:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 29 06:24:19 compute-0 ceph-mon[74654]: 11.18 scrub ok
Nov 29 06:24:19 compute-0 ceph-mon[74654]: 9.13 scrub starts
Nov 29 06:24:19 compute-0 ceph-mon[74654]: 9.13 scrub ok
Nov 29 06:24:19 compute-0 ceph-mon[74654]: 11.7 deep-scrub ok
Nov 29 06:24:19 compute-0 ceph-mon[74654]: pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 06:24:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 06:24:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:19.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:19 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 29 06:24:19 compute-0 sudo[106348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otwucnvcgutomkhmcltxetbrcibtsloe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397459.632728-109-21206820357876/AnsiballZ_setup.py'
Nov 29 06:24:19 compute-0 sudo[106348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 2 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 29 06:24:20 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 06:24:20 compute-0 python3.9[106350]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:24:20 compute-0 sudo[106348]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 29 06:24:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:20.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 06:24:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 06:24:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 06:24:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 06:24:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 29 06:24:21 compute-0 sudo[106433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjsvlqwlhhwtmkbydefpyfzixcicwtxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397459.632728-109-21206820357876/AnsiballZ_dnf.py'
Nov 29 06:24:21 compute-0 sudo[106433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:21 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.13 deep-scrub starts
Nov 29 06:24:21 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.13 deep-scrub ok
Nov 29 06:24:21 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 29 06:24:21 compute-0 python3.9[106435]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 06:24:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:21.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:22 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 29 06:24:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 2 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 29 06:24:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 06:24:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 29 06:24:22 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 29 06:24:22 compute-0 sudo[106433]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:22.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:23.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 29 06:24:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 06:24:24 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 29 06:24:24 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 29 06:24:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:24:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:24:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:24:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:24:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:24:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:24:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 06:24:24 compute-0 ceph-mon[74654]: osdmap e109: 3 total, 3 up, 3 in
Nov 29 06:24:24 compute-0 ceph-mon[74654]: pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 9.3 scrub starts
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 9.3 scrub ok
Nov 29 06:24:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 8.1b scrub starts
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 8.1b scrub ok
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 11.1f deep-scrub starts
Nov 29 06:24:24 compute-0 ceph-mon[74654]: pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 9.15 scrub starts
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 9.15 scrub ok
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 10.8 deep-scrub starts
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 9.5 scrub starts
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 9.5 scrub ok
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 11.1f deep-scrub ok
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 10.8 deep-scrub ok
Nov 29 06:24:24 compute-0 ceph-mon[74654]: pgmap v326: 305 pgs: 2 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 9.9 scrub starts
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 9.9 scrub ok
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 10.14 deep-scrub starts
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 9.19 scrub starts
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 8.4 deep-scrub starts
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 8.4 deep-scrub ok
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 9.19 scrub ok
Nov 29 06:24:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 10.14 deep-scrub ok
Nov 29 06:24:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 06:24:24 compute-0 ceph-mon[74654]: osdmap e110: 3 total, 3 up, 3 in
Nov 29 06:24:24 compute-0 ceph-mon[74654]: pgmap v328: 305 pgs: 2 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 8.12 scrub starts
Nov 29 06:24:24 compute-0 ceph-mon[74654]: 8.12 scrub ok
Nov 29 06:24:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:24.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 06:24:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 29 06:24:25 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 29 06:24:25 compute-0 sshd-session[106462]: Received disconnect from 138.124.186.225 port 50452:11: Bye Bye [preauth]
Nov 29 06:24:25 compute-0 sshd-session[106462]: Disconnected from authenticating user root 138.124.186.225 port 50452 [preauth]
Nov 29 06:24:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:25.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:26 compute-0 sudo[106590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kscphznklzkecwwafcqntzbjfygifmiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397465.693131-151-239530966099875/AnsiballZ_dnf.py'
Nov 29 06:24:26 compute-0 sudo[106590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 1 active+clean+scrubbing, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:26 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 29 06:24:26 compute-0 python3.9[106592]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:24:26 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 29 06:24:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 29 06:24:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:26.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:27 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 29 06:24:27 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 29 06:24:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:27.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:28 compute-0 sudo[106590]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 1 active+clean+scrubbing, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:28 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 29 06:24:28 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 29 06:24:28 compute-0 sshd-session[106618]: Invalid user train1 from 176.109.67.96 port 60866
Nov 29 06:24:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:28.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:28 compute-0 sshd-session[106618]: Received disconnect from 176.109.67.96 port 60866:11: Bye Bye [preauth]
Nov 29 06:24:28 compute-0 sshd-session[106618]: Disconnected from invalid user train1 176.109.67.96 port 60866 [preauth]
Nov 29 06:24:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 06:24:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 06:24:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 06:24:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 06:24:29 compute-0 ceph-mon[74654]: 10.13 deep-scrub starts
Nov 29 06:24:29 compute-0 ceph-mon[74654]: 10.13 deep-scrub ok
Nov 29 06:24:29 compute-0 ceph-mon[74654]: osdmap e111: 3 total, 3 up, 3 in
Nov 29 06:24:29 compute-0 ceph-mon[74654]: 11.1d scrub starts
Nov 29 06:24:29 compute-0 ceph-mon[74654]: 11.1d scrub ok
Nov 29 06:24:29 compute-0 ceph-mon[74654]: 9.8 scrub starts
Nov 29 06:24:29 compute-0 ceph-mon[74654]: 10.5 scrub starts
Nov 29 06:24:29 compute-0 ceph-mon[74654]: pgmap v330: 305 pgs: 2 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 06:24:29 compute-0 ceph-mon[74654]: 9.8 scrub ok
Nov 29 06:24:29 compute-0 ceph-mon[74654]: 10.5 scrub ok
Nov 29 06:24:29 compute-0 ceph-mon[74654]: 8.8 scrub starts
Nov 29 06:24:29 compute-0 ceph-mon[74654]: 8.8 scrub ok
Nov 29 06:24:29 compute-0 ceph-mon[74654]: 9.18 scrub starts
Nov 29 06:24:29 compute-0 ceph-mon[74654]: pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 06:24:29 compute-0 ceph-mon[74654]: 9.18 scrub ok
Nov 29 06:24:29 compute-0 ceph-mon[74654]: 10.1b scrub starts
Nov 29 06:24:29 compute-0 ceph-mon[74654]: 10.1b scrub ok
Nov 29 06:24:29 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 06:24:29 compute-0 ceph-mon[74654]: osdmap e112: 3 total, 3 up, 3 in
Nov 29 06:24:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:24:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:24:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:24:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:24:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:24:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:24:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:24:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:24:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:24:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:24:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:29.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:30 compute-0 sudo[106747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhmxgxvjbuspqvvsjbjddrltcwpwridc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397469.246208-175-17971476400704/AnsiballZ_systemd.py'
Nov 29 06:24:30 compute-0 sudo[106747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 1 active+remapped, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:30 compute-0 python3.9[106749]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 06:24:30 compute-0 sudo[106747]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:30 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 06:24:30 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 29 06:24:30 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 29 06:24:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:30.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:31 compute-0 ceph-mon[74654]: 11.f scrub starts
Nov 29 06:24:31 compute-0 ceph-mon[74654]: 11.f scrub ok
Nov 29 06:24:31 compute-0 ceph-mon[74654]: pgmap v333: 305 pgs: 1 active+clean+scrubbing, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:31 compute-0 ceph-mon[74654]: 10.18 scrub starts
Nov 29 06:24:31 compute-0 ceph-mon[74654]: 10.18 scrub ok
Nov 29 06:24:31 compute-0 ceph-mon[74654]: 10.2 scrub starts
Nov 29 06:24:31 compute-0 ceph-mon[74654]: 10.2 scrub ok
Nov 29 06:24:31 compute-0 ceph-mon[74654]: 8.14 scrub starts
Nov 29 06:24:31 compute-0 ceph-mon[74654]: 8.14 scrub ok
Nov 29 06:24:31 compute-0 ceph-mon[74654]: pgmap v334: 305 pgs: 1 active+clean+scrubbing, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:31 compute-0 ceph-mon[74654]: 10.19 scrub starts
Nov 29 06:24:31 compute-0 ceph-mon[74654]: 10.19 scrub ok
Nov 29 06:24:31 compute-0 ceph-mon[74654]: 11.4 scrub starts
Nov 29 06:24:31 compute-0 ceph-mon[74654]: 11.4 scrub ok
Nov 29 06:24:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 29 06:24:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:24:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:31.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:24:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:32 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 29 06:24:32 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 29 06:24:32 compute-0 python3.9[106903]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:24:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:32.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:33.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 06:24:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 29 06:24:34 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 29 06:24:34 compute-0 ceph-mon[74654]: pgmap v335: 305 pgs: 1 active+remapped, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:34 compute-0 ceph-mon[74654]: 11.1a scrub starts
Nov 29 06:24:34 compute-0 ceph-mon[74654]: 11.1a scrub ok
Nov 29 06:24:34 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 06:24:34 compute-0 ceph-mon[74654]: osdmap e113: 3 total, 3 up, 3 in
Nov 29 06:24:34 compute-0 sudo[107054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaxqltfzjrzcnhnzfegnhlplxsqwrdbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397474.2400389-229-259756376973545/AnsiballZ_sefcontext.py'
Nov 29 06:24:34 compute-0 ceph-mon[74654]: 8.19 scrub starts
Nov 29 06:24:34 compute-0 sudo[107054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:34.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:35 compute-0 python3.9[107057]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 06:24:35 compute-0 sudo[107054]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:35.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 29 06:24:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 29 06:24:36 compute-0 ceph-mon[74654]: 8.19 scrub ok
Nov 29 06:24:36 compute-0 ceph-mon[74654]: pgmap v337: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:36 compute-0 ceph-mon[74654]: 10.15 scrub starts
Nov 29 06:24:36 compute-0 ceph-mon[74654]: 10.15 scrub ok
Nov 29 06:24:36 compute-0 ceph-mon[74654]: 11.1c scrub starts
Nov 29 06:24:36 compute-0 ceph-mon[74654]: 11.5 scrub starts
Nov 29 06:24:36 compute-0 ceph-mon[74654]: pgmap v338: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 06:24:36 compute-0 ceph-mon[74654]: osdmap e114: 3 total, 3 up, 3 in
Nov 29 06:24:36 compute-0 ceph-mon[74654]: 11.5 scrub ok
Nov 29 06:24:36 compute-0 ceph-mon[74654]: 11.1c scrub ok
Nov 29 06:24:36 compute-0 ceph-mon[74654]: 11.1 deep-scrub starts
Nov 29 06:24:36 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 29 06:24:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 170 B/s wr, 14 op/s; 36 B/s, 1 objects/s recovering
Nov 29 06:24:36 compute-0 python3.9[107207]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:24:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:36.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:37 compute-0 sudo[107364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyeymqoqzojrdgutuxfmododmwxwzyis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397476.970677-283-111440478192870/AnsiballZ_dnf.py'
Nov 29 06:24:37 compute-0 sudo[107364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:37 compute-0 python3.9[107366]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:24:37 compute-0 ceph-mon[74654]: 11.1 deep-scrub ok
Nov 29 06:24:37 compute-0 ceph-mon[74654]: osdmap e115: 3 total, 3 up, 3 in
Nov 29 06:24:37 compute-0 ceph-mon[74654]: pgmap v341: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 170 B/s wr, 14 op/s; 36 B/s, 1 objects/s recovering
Nov 29 06:24:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:37.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 11 op/s; 29 B/s, 0 objects/s recovering
Nov 29 06:24:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 29 06:24:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 06:24:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 29 06:24:38 compute-0 sudo[107364]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:38.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:39 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 06:24:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 29 06:24:39 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 29 06:24:39 compute-0 ceph-mon[74654]: pgmap v342: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 11 op/s; 29 B/s, 0 objects/s recovering
Nov 29 06:24:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 06:24:39 compute-0 sudo[107395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:24:39 compute-0 sudo[107395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:24:39 compute-0 sudo[107395]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:39 compute-0 sudo[107438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:24:39 compute-0 sudo[107438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:24:39 compute-0 sudo[107438]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:39.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 29 06:24:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 06:24:40 compute-0 sudo[107570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bobhywwsnzbfqodpybuunnippkpqfbmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397479.672294-307-97117709593531/AnsiballZ_command.py'
Nov 29 06:24:40 compute-0 sudo[107570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:40 compute-0 python3.9[107572]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:24:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 29 06:24:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 06:24:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 29 06:24:40 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 29 06:24:40 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 06:24:40 compute-0 ceph-mon[74654]: osdmap e116: 3 total, 3 up, 3 in
Nov 29 06:24:40 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 06:24:40 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 117 pg[9.19( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=84/84 les/c/f=85/85/0 sis=117) [1] r=0 lpr=117 pi=[84,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:24:40 compute-0 sshd-session[107393]: Invalid user dmdba from 118.193.39.127 port 42880
Nov 29 06:24:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:40.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:41 compute-0 sshd-session[107393]: Received disconnect from 118.193.39.127 port 42880:11: Bye Bye [preauth]
Nov 29 06:24:41 compute-0 sshd-session[107393]: Disconnected from invalid user dmdba 118.193.39.127 port 42880 [preauth]
Nov 29 06:24:41 compute-0 sudo[107570]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 29 06:24:41 compute-0 ceph-mon[74654]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 06:24:41 compute-0 ceph-mon[74654]: osdmap e117: 3 total, 3 up, 3 in
Nov 29 06:24:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 29 06:24:41 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 29 06:24:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 118 pg[9.19( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=84/84 les/c/f=85/85/0 sis=118) [1]/[2] r=-1 lpr=118 pi=[84,118)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:24:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 118 pg[9.19( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=84/84 les/c/f=85/85/0 sis=118) [1]/[2] r=-1 lpr=118 pi=[84,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:24:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:41.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 06:24:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 06:24:42 compute-0 sudo[107858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjrudyouudohdqguuugaoqqzbibqbczq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397481.511942-331-131360123320745/AnsiballZ_file.py'
Nov 29 06:24:42 compute-0 sudo[107858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:42 compute-0 python3.9[107860]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 06:24:42 compute-0 sudo[107858]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 29 06:24:42 compute-0 ceph-mon[74654]: osdmap e118: 3 total, 3 up, 3 in
Nov 29 06:24:42 compute-0 ceph-mon[74654]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:42 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 06:24:42 compute-0 ceph-mon[74654]: 9.16 scrub starts
Nov 29 06:24:42 compute-0 ceph-mon[74654]: 9.16 scrub ok
Nov 29 06:24:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:42.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:43 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 06:24:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 29 06:24:43 compute-0 python3.9[108011]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:24:43 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 29 06:24:43 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=86/86 les/c/f=87/87/0 sis=119) [1] r=0 lpr=119 pi=[86,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:24:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:43.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:24:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 29 06:24:44 compute-0 ceph-mon[74654]: 11.12 scrub starts
Nov 29 06:24:44 compute-0 ceph-mon[74654]: 11.12 scrub ok
Nov 29 06:24:44 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 06:24:44 compute-0 ceph-mon[74654]: osdmap e119: 3 total, 3 up, 3 in
Nov 29 06:24:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6/212 objects misplaced (2.830%)
Nov 29 06:24:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 06:24:44 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 06:24:44 compute-0 sudo[108163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqkiedvrbwzkecjzmojfmfugedozmrxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397483.8816075-379-239950181415488/AnsiballZ_dnf.py'
Nov 29 06:24:44 compute-0 sudo[108163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 29 06:24:44 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 29 06:24:44 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=86/86 les/c/f=87/87/0 sis=120) [1]/[0] r=-1 lpr=120 pi=[86,120)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:24:44 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=86/86 les/c/f=87/87/0 sis=120) [1]/[0] r=-1 lpr=120 pi=[86,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:24:44 compute-0 python3.9[108166]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:24:44 compute-0 sshd-session[108165]: Invalid user autcom from 79.116.35.29 port 47564
Nov 29 06:24:44 compute-0 sshd-session[108165]: Received disconnect from 79.116.35.29 port 47564:11: Bye Bye [preauth]
Nov 29 06:24:44 compute-0 sshd-session[108165]: Disconnected from invalid user autcom 79.116.35.29 port 47564 [preauth]
Nov 29 06:24:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:44.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 29 06:24:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:45.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:45 compute-0 ceph-mon[74654]: pgmap v349: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6/212 objects misplaced (2.830%)
Nov 29 06:24:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 06:24:45 compute-0 ceph-mon[74654]: osdmap e120: 3 total, 3 up, 3 in
Nov 29 06:24:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6/212 objects misplaced (2.830%)
Nov 29 06:24:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 06:24:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 06:24:46 compute-0 sudo[108163]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 06:24:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 29 06:24:46 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 29 06:24:46 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=71/71 les/c/f=72/72/0 sis=121) [1] r=0 lpr=121 pi=[71,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:24:46 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 121 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=7 ec=58/47 lis/c=118/84 les/c/f=119/85/0 sis=121) [1] r=0 lpr=121 pi=[84,121)/1 luod=0'0 crt=56'1130 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:24:46 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 121 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=7 ec=58/47 lis/c=118/84 les/c/f=119/85/0 sis=121) [1] r=0 lpr=121 pi=[84,121)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:24:46 compute-0 sudo[108321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvxnjqpfmoeiroliomhbajhuhziunuow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397486.4197712-406-279035390846100/AnsiballZ_dnf.py'
Nov 29 06:24:46 compute-0 sudo[108321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:46 compute-0 python3.9[108323]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:24:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:46.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 29 06:24:47 compute-0 sshd-session[108310]: Invalid user guest123 from 31.6.212.12 port 51222
Nov 29 06:24:47 compute-0 sshd-session[108310]: Received disconnect from 31.6.212.12 port 51222:11: Bye Bye [preauth]
Nov 29 06:24:47 compute-0 sshd-session[108310]: Disconnected from invalid user guest123 31.6.212.12 port 51222 [preauth]
Nov 29 06:24:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:47.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 06:24:48 compute-0 ceph-mon[74654]: pgmap v351: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 6/212 objects misplaced (2.830%)
Nov 29 06:24:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 06:24:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 06:24:48 compute-0 ceph-mon[74654]: osdmap e121: 3 total, 3 up, 3 in
Nov 29 06:24:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:24:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:48.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:24:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:49.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 06:24:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:50.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:51 compute-0 sudo[108321]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 06:24:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 29 06:24:51 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 29 06:24:51 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 122 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=120/86 les/c/f=121/87/0 sis=122) [1] r=0 lpr=122 pi=[86,122)/1 luod=0'0 crt=56'1130 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:24:51 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=71/71 les/c/f=72/72/0 sis=122) [1]/[2] r=-1 lpr=122 pi=[71,122)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:24:51 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 122 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=120/86 les/c/f=121/87/0 sis=122) [1] r=0 lpr=122 pi=[86,122)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:24:51 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=71/71 les/c/f=72/72/0 sis=122) [1]/[2] r=-1 lpr=122 pi=[71,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:24:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:51.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:51 compute-0 ceph-mon[74654]: 11.1b scrub starts
Nov 29 06:24:51 compute-0 ceph-mon[74654]: 11.1b scrub ok
Nov 29 06:24:51 compute-0 ceph-mon[74654]: pgmap v353: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 06:24:51 compute-0 ceph-mon[74654]: 8.18 scrub starts
Nov 29 06:24:51 compute-0 ceph-mon[74654]: 8.18 scrub ok
Nov 29 06:24:51 compute-0 ceph-mon[74654]: pgmap v354: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 06:24:51 compute-0 ceph-mon[74654]: 9.e scrub starts
Nov 29 06:24:51 compute-0 ceph-mon[74654]: 9.e scrub ok
Nov 29 06:24:51 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 122 pg[9.19( v 56'1130 (0'0,56'1130] local-lis/les=121/122 n=7 ec=58/47 lis/c=118/84 les/c/f=119/85/0 sis=121) [1] r=0 lpr=121 pi=[84,121)/1 crt=56'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:24:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 0 objects/s recovering
Nov 29 06:24:52 compute-0 sudo[108477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fljjkopalgydjmawxldqadjbseozzrug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397492.105882-442-119403433904414/AnsiballZ_stat.py'
Nov 29 06:24:52 compute-0 sudo[108477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:52 compute-0 python3.9[108479]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:24:52 compute-0 sudo[108477]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:52.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 29 06:24:53 compute-0 sudo[108632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecyjmfztwavhuhcfrwyyiyagusmccmkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397492.917344-466-200570992198593/AnsiballZ_slurp.py'
Nov 29 06:24:53 compute-0 sudo[108632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:24:53 compute-0 python3.9[108634]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 29 06:24:53 compute-0 sudo[108632]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:24:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:53.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:24:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 06:24:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:24:54
Nov 29 06:24:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:24:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Some PGs (0.003279) are unknown; try again later
Nov 29 06:24:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 29 06:24:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:24:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:24:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:24:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:24:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:24:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:24:54 compute-0 ceph-mon[74654]: 9.1e scrub starts
Nov 29 06:24:54 compute-0 ceph-mon[74654]: 9.1e scrub ok
Nov 29 06:24:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 06:24:54 compute-0 ceph-mon[74654]: osdmap e122: 3 total, 3 up, 3 in
Nov 29 06:24:54 compute-0 ceph-mon[74654]: pgmap v356: 305 pgs: 1 unknown, 1 active+remapped, 1 peering, 302 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 0 objects/s recovering
Nov 29 06:24:54 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 29 06:24:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:54.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:55 compute-0 sshd-session[105835]: Connection closed by 192.168.122.30 port 45906
Nov 29 06:24:55 compute-0 sshd-session[105832]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:24:55 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Nov 29 06:24:55 compute-0 systemd[1]: session-36.scope: Consumed 19.544s CPU time.
Nov 29 06:24:55 compute-0 systemd-logind[797]: Session 36 logged out. Waiting for processes to exit.
Nov 29 06:24:55 compute-0 systemd-logind[797]: Removed session 36.
Nov 29 06:24:55 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 123 pg[9.1a( v 56'1130 (0'0,56'1130] local-lis/les=122/123 n=5 ec=58/47 lis/c=120/86 les/c/f=121/87/0 sis=122) [1] r=0 lpr=122 pi=[86,122)/1 crt=56'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:24:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:24:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:55.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:24:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:56 compute-0 ceph-mon[74654]: pgmap v357: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 06:24:56 compute-0 ceph-mon[74654]: osdmap e123: 3 total, 3 up, 3 in
Nov 29 06:24:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:24:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:24:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:56.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:24:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 29 06:24:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:24:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:57.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:24:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:24:58.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:24:59 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 29 06:24:59 compute-0 ceph-osd[85162]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 29 06:24:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 29 06:24:59 compute-0 sudo[108662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:24:59 compute-0 sudo[108662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:24:59 compute-0 sudo[108662]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:59 compute-0 ceph-mon[74654]: pgmap v359: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:24:59 compute-0 sudo[108687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:24:59 compute-0 sudo[108687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:24:59 compute-0 sudo[108687]: pam_unix(sudo:session): session closed for user root
Nov 29 06:24:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:24:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:24:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:24:59.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 06:25:00 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 124 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=2 ec=58/47 lis/c=122/71 les/c/f=123/72/0 sis=124) [1] r=0 lpr=124 pi=[71,124)/1 luod=0'0 crt=56'1130 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:25:00 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 124 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=2 ec=58/47 lis/c=122/71 les/c/f=123/72/0 sis=124) [1] r=0 lpr=124 pi=[71,124)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:25:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 29 06:25:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 29 06:25:00 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 06:25:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 29 06:25:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:00.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:01.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:01 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 06:25:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 29 06:25:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 06:25:02 compute-0 sshd-session[108713]: Accepted publickey for zuul from 192.168.122.30 port 56098 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:25:02 compute-0 ceph-mon[74654]: 9.6 deep-scrub starts
Nov 29 06:25:02 compute-0 ceph-mon[74654]: 9.6 deep-scrub ok
Nov 29 06:25:02 compute-0 ceph-mon[74654]: pgmap v360: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:02 compute-0 systemd-logind[797]: New session 37 of user zuul.
Nov 29 06:25:02 compute-0 systemd[1]: Started Session 37 of User zuul.
Nov 29 06:25:02 compute-0 sshd-session[108713]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:25:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:02.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:03 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 29 06:25:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 29 06:25:03 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 06:25:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:03.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:04 compute-0 python3.9[108867]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:25:04 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 125 pg[9.1b( v 56'1130 (0'0,56'1130] local-lis/les=124/125 n=2 ec=58/47 lis/c=122/71 les/c/f=123/72/0 sis=124) [1] r=0 lpr=124 pi=[71,124)/1 crt=56'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:25:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:04.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 29 06:25:05 compute-0 python3.9[109022]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:25:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:05.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 06:25:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 29 06:25:06 compute-0 ceph-mon[74654]: 9.1a scrub starts
Nov 29 06:25:06 compute-0 ceph-mon[74654]: 9.1a scrub ok
Nov 29 06:25:06 compute-0 ceph-mon[74654]: pgmap v362: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 06:25:06 compute-0 ceph-mon[74654]: osdmap e124: 3 total, 3 up, 3 in
Nov 29 06:25:06 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 06:25:06 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 06:25:06 compute-0 ceph-mon[74654]: osdmap e125: 3 total, 3 up, 3 in
Nov 29 06:25:06 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 06:25:06 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 29 06:25:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:25:06 compute-0 python3.9[109215]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:25:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:06.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:07 compute-0 sshd-session[108717]: Connection closed by 192.168.122.30 port 56098
Nov 29 06:25:07 compute-0 sshd-session[108713]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:25:07 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Nov 29 06:25:07 compute-0 systemd[1]: session-37.scope: Consumed 2.646s CPU time.
Nov 29 06:25:07 compute-0 systemd-logind[797]: Session 37 logged out. Waiting for processes to exit.
Nov 29 06:25:07 compute-0 systemd-logind[797]: Removed session 37.
Nov 29 06:25:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:07.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:08 compute-0 sudo[109242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:08 compute-0 sudo[109242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:08 compute-0 sudo[109242]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:08 compute-0 sudo[109267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:25:08 compute-0 sudo[109267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:08 compute-0 sudo[109267]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:08 compute-0 sudo[109292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:08 compute-0 sudo[109292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:08 compute-0 sudo[109292]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 06:25:08 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 06:25:08 compute-0 ceph-mon[74654]: pgmap v363: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 06:25:08 compute-0 ceph-mon[74654]: pgmap v365: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:08 compute-0 ceph-mon[74654]: pgmap v366: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:08 compute-0 ceph-mon[74654]: 9.a scrub starts
Nov 29 06:25:08 compute-0 ceph-mon[74654]: 9.a scrub ok
Nov 29 06:25:08 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 06:25:08 compute-0 ceph-mon[74654]: osdmap e126: 3 total, 3 up, 3 in
Nov 29 06:25:08 compute-0 sudo[109317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 06:25:08 compute-0 sudo[109317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 29 06:25:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:08.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:09 compute-0 podman[109415]: 2025-11-29 06:25:09.078631787 +0000 UTC m=+0.098546893 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 06:25:09 compute-0 podman[109415]: 2025-11-29 06:25:09.210464144 +0000 UTC m=+0.230379250 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 06:25:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:25:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:25:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:09.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:25:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 06:25:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 06:25:10 compute-0 sshd-session[109436]: Invalid user zhangsan from 103.147.159.91 port 52964
Nov 29 06:25:10 compute-0 podman[109570]: 2025-11-29 06:25:10.523614247 +0000 UTC m=+0.651432907 container exec f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:25:10 compute-0 podman[109592]: 2025-11-29 06:25:10.699090602 +0000 UTC m=+0.159466541 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:25:10 compute-0 podman[109570]: 2025-11-29 06:25:10.722627272 +0000 UTC m=+0.850445902 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:25:10 compute-0 sshd-session[109436]: Received disconnect from 103.147.159.91 port 52964:11: Bye Bye [preauth]
Nov 29 06:25:10 compute-0 sshd-session[109436]: Disconnected from invalid user zhangsan 103.147.159.91 port 52964 [preauth]
Nov 29 06:25:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:10.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:11 compute-0 podman[109638]: 2025-11-29 06:25:11.065503543 +0000 UTC m=+0.061375572 container exec c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, release=1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Nov 29 06:25:11 compute-0 podman[109638]: 2025-11-29 06:25:11.475486179 +0000 UTC m=+0.471358168 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, version=2.2.4, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.buildah.version=1.28.2)
Nov 29 06:25:11 compute-0 sudo[109317]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:25:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:11.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 06:25:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:25:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:25:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:12.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:13.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 06:25:14 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 06:25:14 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 06:25:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 29 06:25:14 compute-0 ceph-mon[74654]: pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:14 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 06:25:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:14.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:15 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 29 06:25:15 compute-0 sshd-session[109672]: Accepted publickey for zuul from 192.168.122.30 port 38358 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:25:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:25:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:25:15 compute-0 systemd-logind[797]: New session 38 of user zuul.
Nov 29 06:25:15 compute-0 systemd[1]: Started Session 38 of User zuul.
Nov 29 06:25:15 compute-0 sshd-session[109672]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:25:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 29 06:25:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:15.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:15 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:25:16 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 06:25:16 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 06:25:16 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 06:25:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 29 06:25:16 compute-0 ceph-mon[74654]: pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 06:25:16 compute-0 ceph-mon[74654]: pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 06:25:16 compute-0 ceph-mon[74654]: pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 06:25:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 06:25:16 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:16 compute-0 ceph-mon[74654]: osdmap e127: 3 total, 3 up, 3 in
Nov 29 06:25:16 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:16 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 29 06:25:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:25:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 29 06:25:16 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 06:25:16 compute-0 python3.9[109825]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:25:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:25:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 29 06:25:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:16.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:17 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:17 compute-0 python3.9[109980]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:25:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:17.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 29 06:25:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 06:25:18 compute-0 sudo[110136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkwzgsodekmxihazmttjpcskfgsersnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397517.9775686-85-211889810810302/AnsiballZ_setup.py'
Nov 29 06:25:18 compute-0 sudo[110136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:18 compute-0 python3.9[110138]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:25:18 compute-0 sshd-session[109981]: Received disconnect from 104.208.108.166 port 52340:11: Bye Bye [preauth]
Nov 29 06:25:18 compute-0 sshd-session[109981]: Disconnected from authenticating user root 104.208.108.166 port 52340 [preauth]
Nov 29 06:25:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 06:25:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 29 06:25:18 compute-0 sudo[110136]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:18 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:18 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 06:25:18 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 06:25:18 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 06:25:18 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:18 compute-0 ceph-mon[74654]: osdmap e128: 3 total, 3 up, 3 in
Nov 29 06:25:18 compute-0 ceph-mon[74654]: pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:18 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 06:25:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:19.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:19 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:19 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 29 06:25:19 compute-0 sudo[110148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:19 compute-0 sudo[110148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:19 compute-0 sudo[110148]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:19 compute-0 sudo[110173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:25:19 compute-0 sudo[110173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:19 compute-0 sudo[110173]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:19 compute-0 sudo[110219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:19 compute-0 sudo[110219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:19 compute-0 sudo[110219]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:19 compute-0 sudo[110253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:25:19 compute-0 sudo[110253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:19 compute-0 sudo[110321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgmfzodsdjenjyyaytvuihjfaivakdts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397517.9775686-85-211889810810302/AnsiballZ_dnf.py'
Nov 29 06:25:19 compute-0 sudo[110321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:19 compute-0 python3.9[110323]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:25:19 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:19 compute-0 sudo[110253]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 29 06:25:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:19.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:19 compute-0 sudo[110356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:19 compute-0 sudo[110356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:19 compute-0 sudo[110356]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:20 compute-0 sudo[110381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:20 compute-0 sudo[110381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:20 compute-0 sudo[110381]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:25:20 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:25:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:25:20 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:25:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:25:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:21.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:21 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=78/78 les/c/f=79/79/0 sis=129) [1] r=0 lpr=129 pi=[78,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:25:21 compute-0 sudo[110321]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:21.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:22 compute-0 ceph-mon[74654]: 9.d scrub starts
Nov 29 06:25:22 compute-0 ceph-mon[74654]: 9.d scrub ok
Nov 29 06:25:22 compute-0 ceph-mon[74654]: 9.f scrub starts
Nov 29 06:25:22 compute-0 ceph-mon[74654]: 9.f scrub ok
Nov 29 06:25:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:22 compute-0 ceph-mon[74654]: pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 06:25:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 06:25:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:22 compute-0 ceph-mon[74654]: osdmap e129: 3 total, 3 up, 3 in
Nov 29 06:25:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 06:25:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 29 06:25:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:22 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 29 06:25:22 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev fdd25b2b-c4ab-4a08-b45f-a07c6dcc6a00 does not exist
Nov 29 06:25:22 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev d853bfb0-e303-41cb-90dd-f6e85a9398f0 does not exist
Nov 29 06:25:22 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 46fcd6cd-2bac-4c20-92c6-eefc04520e9e does not exist
Nov 29 06:25:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:25:22 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:25:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:25:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:25:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:25:22 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:25:22 compute-0 sudo[110431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:22 compute-0 sudo[110431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:22 compute-0 sudo[110431]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:22 compute-0 sudo[110457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:25:22 compute-0 sudo[110457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:22 compute-0 sudo[110457]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:22 compute-0 sudo[110504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:22 compute-0 sudo[110504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:22 compute-0 sudo[110504]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:22 compute-0 sudo[110558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:25:22 compute-0 sudo[110558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:23.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:23 compute-0 sudo[110711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxavhokmzwiohxrppnaazprautnkxjye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397522.662125-121-125511818058372/AnsiballZ_setup.py'
Nov 29 06:25:23 compute-0 sudo[110711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:23 compute-0 podman[110670]: 2025-11-29 06:25:23.176959406 +0000 UTC m=+0.037794580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:25:23 compute-0 python3.9[110713]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:25:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 29 06:25:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:23.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:23 compute-0 podman[110670]: 2025-11-29 06:25:23.990855283 +0000 UTC m=+0.851690427 container create e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 06:25:23 compute-0 sudo[110711]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:23 compute-0 ceph-mon[74654]: pgmap v377: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:25:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:25:23 compute-0 ceph-mon[74654]: pgmap v378: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 06:25:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:23 compute-0 ceph-mon[74654]: osdmap e130: 3 total, 3 up, 3 in
Nov 29 06:25:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:25:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:25:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:25:24 compute-0 systemd[1]: Started libpod-conmon-e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c.scope.
Nov 29 06:25:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 29 06:25:24 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 29 06:25:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:25:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=78/78 les/c/f=79/79/0 sis=131) [1]/[0] r=-1 lpr=131 pi=[78,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:25:24 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 131 pg[9.1e( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=78/78 les/c/f=79/79/0 sis=131) [1]/[0] r=-1 lpr=131 pi=[78,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:25:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:24 compute-0 podman[110670]: 2025-11-29 06:25:24.143018014 +0000 UTC m=+1.003853158 container init e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 06:25:24 compute-0 podman[110670]: 2025-11-29 06:25:24.15060238 +0000 UTC m=+1.011437514 container start e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 06:25:24 compute-0 podman[110670]: 2025-11-29 06:25:24.154434625 +0000 UTC m=+1.015269759 container attach e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 06:25:24 compute-0 heuristic_gauss[110783]: 167 167
Nov 29 06:25:24 compute-0 systemd[1]: libpod-e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c.scope: Deactivated successfully.
Nov 29 06:25:24 compute-0 podman[110670]: 2025-11-29 06:25:24.156733957 +0000 UTC m=+1.017569091 container died e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 06:25:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:25:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:25:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:25:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:25:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:25:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:25:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4665e3c56c94b852808c214b15f6df8da5ef077be2b10e9bbd0f13f290823647-merged.mount: Deactivated successfully.
Nov 29 06:25:24 compute-0 podman[110670]: 2025-11-29 06:25:24.320456831 +0000 UTC m=+1.181291965 container remove e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 06:25:24 compute-0 systemd[1]: libpod-conmon-e7b4af43e3f642c71ac2958f57d09c7c1ddc6cc86d498fa01404ce27273bd56c.scope: Deactivated successfully.
Nov 29 06:25:24 compute-0 podman[110829]: 2025-11-29 06:25:24.451556859 +0000 UTC m=+0.019131272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:25:24 compute-0 podman[110829]: 2025-11-29 06:25:24.552223068 +0000 UTC m=+0.119797421 container create ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mcnulty, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 06:25:24 compute-0 systemd[1]: Started libpod-conmon-ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b.scope.
Nov 29 06:25:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cb63fc6033e21838bb7304b258349c870aba4fe35c29a8f0a89c36cb1c930d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cb63fc6033e21838bb7304b258349c870aba4fe35c29a8f0a89c36cb1c930d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cb63fc6033e21838bb7304b258349c870aba4fe35c29a8f0a89c36cb1c930d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cb63fc6033e21838bb7304b258349c870aba4fe35c29a8f0a89c36cb1c930d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cb63fc6033e21838bb7304b258349c870aba4fe35c29a8f0a89c36cb1c930d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:25:24 compute-0 podman[110829]: 2025-11-29 06:25:24.638226338 +0000 UTC m=+0.205800771 container init ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mcnulty, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 06:25:24 compute-0 podman[110829]: 2025-11-29 06:25:24.648064486 +0000 UTC m=+0.215638869 container start ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mcnulty, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 06:25:24 compute-0 podman[110829]: 2025-11-29 06:25:24.694557001 +0000 UTC m=+0.262131364 container attach ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 06:25:24 compute-0 sudo[110954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfldtynflpvwmgbuiczubllyzfqzsoun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397524.422912-154-214367322103828/AnsiballZ_file.py'
Nov 29 06:25:24 compute-0 sudo[110954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:25.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 29 06:25:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 29 06:25:25 compute-0 ceph-mon[74654]: osdmap e131: 3 total, 3 up, 3 in
Nov 29 06:25:25 compute-0 ceph-mon[74654]: pgmap v381: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:25 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 29 06:25:25 compute-0 python3.9[110956]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:25:25 compute-0 sudo[110954]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:25 compute-0 priceless_mcnulty[110873]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:25:25 compute-0 priceless_mcnulty[110873]: --> relative data size: 1.0
Nov 29 06:25:25 compute-0 priceless_mcnulty[110873]: --> All data devices are unavailable
Nov 29 06:25:25 compute-0 systemd[1]: libpod-ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b.scope: Deactivated successfully.
Nov 29 06:25:25 compute-0 podman[110829]: 2025-11-29 06:25:25.485539075 +0000 UTC m=+1.053113438 container died ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mcnulty, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 06:25:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-65cb63fc6033e21838bb7304b258349c870aba4fe35c29a8f0a89c36cb1c930d-merged.mount: Deactivated successfully.
Nov 29 06:25:25 compute-0 podman[110829]: 2025-11-29 06:25:25.67273157 +0000 UTC m=+1.240305933 container remove ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mcnulty, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:25:25 compute-0 systemd[1]: libpod-conmon-ea440f92dd8dc50e31e825a3ee183a6da5d118456742852e38dd723173a6d29b.scope: Deactivated successfully.
Nov 29 06:25:25 compute-0 sudo[110558]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:25 compute-0 sudo[111059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:25 compute-0 sudo[111059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:25 compute-0 sudo[111059]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:25 compute-0 sudo[111084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:25:25 compute-0 sudo[111084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:25 compute-0 sudo[111084]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:25 compute-0 sudo[111132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:25 compute-0 sshd-session[110964]: Invalid user zhangsan from 138.124.186.225 port 33108
Nov 29 06:25:25 compute-0 sudo[111132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:25 compute-0 sudo[111132]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:25.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:25 compute-0 sudo[111180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:25:25 compute-0 sudo[111180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:25 compute-0 sshd-session[110964]: Received disconnect from 138.124.186.225 port 33108:11: Bye Bye [preauth]
Nov 29 06:25:25 compute-0 sshd-session[110964]: Disconnected from invalid user zhangsan 138.124.186.225 port 33108 [preauth]
Nov 29 06:25:25 compute-0 sudo[111232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsepevejmkkpabjgwfzdawahfifpngvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397525.5027874-178-192651364156176/AnsiballZ_command.py'
Nov 29 06:25:25 compute-0 sudo[111232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 29 06:25:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:26 compute-0 sshd-session[110896]: Received disconnect from 49.247.35.31 port 42968:11: Bye Bye [preauth]
Nov 29 06:25:26 compute-0 sshd-session[110896]: Disconnected from authenticating user root 49.247.35.31 port 42968 [preauth]
Nov 29 06:25:26 compute-0 python3.9[111234]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:25:26 compute-0 sudo[111232]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:26 compute-0 podman[111287]: 2025-11-29 06:25:26.281215297 +0000 UTC m=+0.022609516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:25:26 compute-0 podman[111287]: 2025-11-29 06:25:26.512626725 +0000 UTC m=+0.254020934 container create b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 06:25:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 29 06:25:26 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 29 06:25:26 compute-0 systemd[1]: Started libpod-conmon-b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698.scope.
Nov 29 06:25:26 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 133 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 luod=0'0 crt=56'1130 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:25:26 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 133 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:25:26 compute-0 ceph-mon[74654]: osdmap e132: 3 total, 3 up, 3 in
Nov 29 06:25:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:25:26 compute-0 podman[111287]: 2025-11-29 06:25:26.692919721 +0000 UTC m=+0.434313950 container init b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:25:26 compute-0 podman[111287]: 2025-11-29 06:25:26.701958936 +0000 UTC m=+0.443353145 container start b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilson, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 06:25:26 compute-0 podman[111287]: 2025-11-29 06:25:26.706762397 +0000 UTC m=+0.448156606 container attach b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 06:25:26 compute-0 eloquent_wilson[111379]: 167 167
Nov 29 06:25:26 compute-0 systemd[1]: libpod-b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698.scope: Deactivated successfully.
Nov 29 06:25:26 compute-0 podman[111287]: 2025-11-29 06:25:26.71126003 +0000 UTC m=+0.452654239 container died b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilson, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 06:25:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:25:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-69a207eb2644393141811213d70d756919bb8b9352533d88782a8c8f64c0a281-merged.mount: Deactivated successfully.
Nov 29 06:25:26 compute-0 sudo[111468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjjaojcziphwkcutqtxuqzaipxxplivs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397526.4530287-202-67280056561102/AnsiballZ_stat.py'
Nov 29 06:25:26 compute-0 sudo[111468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:27.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:27 compute-0 python3.9[111470]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:25:27 compute-0 sudo[111468]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:27 compute-0 podman[111287]: 2025-11-29 06:25:27.226587203 +0000 UTC m=+0.967981412 container remove b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_wilson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:25:27 compute-0 systemd[1]: libpod-conmon-b02f9a29b3a35d68f02aa0274f542b0ec0d067281632ad664d4b6972de9f6698.scope: Deactivated successfully.
Nov 29 06:25:27 compute-0 sudo[111564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzfimucamyxdognkvsyecopxdttmlphr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397526.4530287-202-67280056561102/AnsiballZ_file.py'
Nov 29 06:25:27 compute-0 sudo[111564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:27 compute-0 podman[111528]: 2025-11-29 06:25:27.420737246 +0000 UTC m=+0.059533171 container create 473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 29 06:25:27 compute-0 systemd[1]: Started libpod-conmon-473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922.scope.
Nov 29 06:25:27 compute-0 podman[111528]: 2025-11-29 06:25:27.385123447 +0000 UTC m=+0.023919382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:25:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395fc60bcc2310860ae083984018937c9ec9be027b31c9e6c96f234581df46a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395fc60bcc2310860ae083984018937c9ec9be027b31c9e6c96f234581df46a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395fc60bcc2310860ae083984018937c9ec9be027b31c9e6c96f234581df46a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395fc60bcc2310860ae083984018937c9ec9be027b31c9e6c96f234581df46a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:25:27 compute-0 podman[111528]: 2025-11-29 06:25:27.499294814 +0000 UTC m=+0.138090759 container init 473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 06:25:27 compute-0 podman[111528]: 2025-11-29 06:25:27.511437564 +0000 UTC m=+0.150233489 container start 473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:25:27 compute-0 podman[111528]: 2025-11-29 06:25:27.515141475 +0000 UTC m=+0.153937400 container attach 473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 06:25:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 29 06:25:27 compute-0 python3.9[111569]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:25:27 compute-0 sudo[111564]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 29 06:25:27 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 29 06:25:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:27.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:28 compute-0 sudo[111726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txgnnoqkkmepjescqzcnpcmvwmcuiqpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397527.8982809-238-147072366995317/AnsiballZ_stat.py'
Nov 29 06:25:28 compute-0 sudo[111726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]: {
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:     "1": [
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:         {
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:             "devices": [
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:                 "/dev/loop3"
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:             ],
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:             "lv_name": "ceph_lv0",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:             "lv_size": "7511998464",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:             "name": "ceph_lv0",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:             "tags": {
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:                 "ceph.cluster_name": "ceph",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:                 "ceph.crush_device_class": "",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:                 "ceph.encrypted": "0",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:                 "ceph.osd_id": "1",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:                 "ceph.type": "block",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:                 "ceph.vdo": "0"
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:             },
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:             "type": "block",
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:             "vg_name": "ceph_vg0"
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:         }
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]:     ]
Nov 29 06:25:28 compute-0 hopeful_blackwell[111572]: }
Nov 29 06:25:28 compute-0 systemd[1]: libpod-473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922.scope: Deactivated successfully.
Nov 29 06:25:28 compute-0 podman[111528]: 2025-11-29 06:25:28.3444086 +0000 UTC m=+0.983204525 container died 473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:25:28 compute-0 ceph-mon[74654]: pgmap v383: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:28 compute-0 ceph-mon[74654]: osdmap e133: 3 total, 3 up, 3 in
Nov 29 06:25:28 compute-0 python3.9[111729]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:25:28 compute-0 sudo[111726]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:28 compute-0 sudo[111819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-desvnmwtjyjkosasxshkpfskxafxbohd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397527.8982809-238-147072366995317/AnsiballZ_file.py'
Nov 29 06:25:28 compute-0 sudo[111819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:28 compute-0 python3.9[111821]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:25:28 compute-0 sudo[111819]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:29.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:25:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:25:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:25:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:25:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:25:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:25:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:25:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:25:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:25:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:25:29 compute-0 sudo[111972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwxlbsdggiewqbffqhuzsgolrhgmctfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397529.2815094-277-186366842265759/AnsiballZ_ini_file.py'
Nov 29 06:25:29 compute-0 sudo[111972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:29.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:30 compute-0 python3.9[111974]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:25:30 compute-0 sudo[111972]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 170 B/s wr, 15 op/s; 109 B/s, 2 objects/s recovering
Nov 29 06:25:30 compute-0 sudo[112124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kubfqxzapnmapvkmlihtuczsxquhdfkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397530.28991-277-54273157256012/AnsiballZ_ini_file.py'
Nov 29 06:25:30 compute-0 sudo[112124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:30 compute-0 python3.9[112126]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:25:30 compute-0 sudo[112124]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:31.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:31 compute-0 sudo[112277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acdlmbccfurymkacxcmkqmkutrhhgwal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397531.1467671-277-276751972321122/AnsiballZ_ini_file.py'
Nov 29 06:25:31 compute-0 sudo[112277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:31 compute-0 python3.9[112279]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:25:31 compute-0 sudo[112277]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 06:25:31 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=133/134 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:25:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:31.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-395fc60bcc2310860ae083984018937c9ec9be027b31c9e6c96f234581df46a8-merged.mount: Deactivated successfully.
Nov 29 06:25:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 62 B/s, 0 objects/s recovering
Nov 29 06:25:32 compute-0 sudo[112430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vozsikdjxhvhnxtfikkxaswvmjeabfmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397531.9382267-277-72947652499023/AnsiballZ_ini_file.py'
Nov 29 06:25:32 compute-0 sudo[112430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:32 compute-0 python3.9[112432]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:25:32 compute-0 sudo[112430]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:32 compute-0 podman[111528]: 2025-11-29 06:25:32.446607777 +0000 UTC m=+5.085403702 container remove 473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:25:32 compute-0 systemd[1]: libpod-conmon-473a02aeec21dc113b27476c4bb98a7085ec407f6f0ef0f74a20e609c2ba3922.scope: Deactivated successfully.
Nov 29 06:25:32 compute-0 sudo[111180]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:32 compute-0 sudo[112457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:32 compute-0 sudo[112457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:32 compute-0 sudo[112457]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:32 compute-0 sudo[112482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:25:32 compute-0 sudo[112482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:32 compute-0 sudo[112482]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:32 compute-0 sudo[112507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:32 compute-0 sudo[112507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:32 compute-0 sudo[112507]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:32 compute-0 sudo[112532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:25:32 compute-0 sudo[112532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:33.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:33 compute-0 podman[112670]: 2025-11-29 06:25:33.024220195 +0000 UTC m=+0.022724339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:25:33 compute-0 podman[112670]: 2025-11-29 06:25:33.23682429 +0000 UTC m=+0.235328414 container create 1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:25:33 compute-0 sudo[112734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euvhfwkiqpwzpkjvxnymyhjhpviyjdql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397532.7444956-370-227812809853090/AnsiballZ_dnf.py'
Nov 29 06:25:33 compute-0 sudo[112734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:33 compute-0 python3.9[112736]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:25:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:33.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:25:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:25:34 compute-0 systemd[1]: Started libpod-conmon-1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e.scope.
Nov 29 06:25:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:25:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 29 06:25:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:35.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:35 compute-0 ceph-mon[74654]: osdmap e134: 3 total, 3 up, 3 in
Nov 29 06:25:35 compute-0 ceph-mon[74654]: pgmap v386: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:35 compute-0 podman[112670]: 2025-11-29 06:25:35.321024534 +0000 UTC m=+2.319528728 container init 1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:25:35 compute-0 podman[112670]: 2025-11-29 06:25:35.332808795 +0000 UTC m=+2.331312959 container start 1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 06:25:35 compute-0 bold_elbakyan[112740]: 167 167
Nov 29 06:25:35 compute-0 systemd[1]: libpod-1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e.scope: Deactivated successfully.
Nov 29 06:25:35 compute-0 sudo[112734]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:35.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:36 compute-0 podman[112670]: 2025-11-29 06:25:36.229194576 +0000 UTC m=+3.227698710 container attach 1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:25:36 compute-0 podman[112670]: 2025-11-29 06:25:36.230501502 +0000 UTC m=+3.229005666 container died 1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:25:36 compute-0 sudo[112908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utswbhpguqpmqclgfghwllyjzdbbbwni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397536.4665883-403-41560498135578/AnsiballZ_setup.py'
Nov 29 06:25:36 compute-0 sudo[112908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:25:36 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:25:36 compute-0 sshd-session[112744]: Received disconnect from 58.210.98.130 port 62874:11: Bye Bye [preauth]
Nov 29 06:25:36 compute-0 sshd-session[112744]: Disconnected from authenticating user root 58.210.98.130 port 62874 [preauth]
Nov 29 06:25:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:37.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:37 compute-0 python3.9[112910]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:25:37 compute-0 sudo[112908]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:37 compute-0 sudo[113063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jewueuqdeoblmaqifnbkexvoftndsrlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397537.375559-427-119305429605850/AnsiballZ_stat.py'
Nov 29 06:25:37 compute-0 sudo[113063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:37.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:38 compute-0 python3.9[113065]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:25:38 compute-0 sudo[113063]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:38 compute-0 sudo[113215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qesdwuxkwehlzqjydmwuzyzzjnrvftng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397538.363464-454-39140785480420/AnsiballZ_stat.py'
Nov 29 06:25:38 compute-0 sudo[113215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:38 compute-0 python3.9[113217]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:25:38 compute-0 sudo[113215]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:39.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:39 compute-0 sudo[113368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmcdzqcvmuxgylqdfsyuwasckzyufldb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397539.1867013-484-15673258137538/AnsiballZ_command.py'
Nov 29 06:25:39 compute-0 sudo[113368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:39 compute-0 python3.9[113370]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:25:39 compute-0 sudo[113368]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:39.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:40 compute-0 sudo[113442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:40 compute-0 sudo[113442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:40 compute-0 sudo[113442]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:40 compute-0 sudo[113473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:40 compute-0 sudo[113473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:40 compute-0 sudo[113473]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:25:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:25:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:25:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 29 06:25:40 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 06:25:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 06:25:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:25:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa815d34c3e44f00ca9de99b70b70bc562b0283776134a8925600a9a3ec8ed36-merged.mount: Deactivated successfully.
Nov 29 06:25:40 compute-0 podman[112670]: 2025-11-29 06:25:40.515994028 +0000 UTC m=+7.514498172 container remove 1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_elbakyan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:25:40 compute-0 sudo[113574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itzpohvetozndvlftbjynqjdvtxjdiff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397540.062356-514-89405490451485/AnsiballZ_service_facts.py'
Nov 29 06:25:40 compute-0 sudo[113574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:40 compute-0 systemd[1]: libpod-conmon-1f6c322f8f62420c7a3d85fa9ee245847bcb76e790b1d63628b7cc617a70798e.scope: Deactivated successfully.
Nov 29 06:25:40 compute-0 podman[113582]: 2025-11-29 06:25:40.665751504 +0000 UTC m=+0.028008644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:25:40 compute-0 python3.9[113576]: ansible-service_facts Invoked
Nov 29 06:25:40 compute-0 network[113613]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 06:25:40 compute-0 network[113614]: 'network-scripts' will be removed from distribution in near future.
Nov 29 06:25:40 compute-0 network[113615]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 06:25:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:41.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:41 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:25:41 compute-0 ceph-mon[74654]: pgmap v387: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 170 B/s wr, 15 op/s; 109 B/s, 2 objects/s recovering
Nov 29 06:25:41 compute-0 ceph-mon[74654]: 9.1f deep-scrub starts
Nov 29 06:25:41 compute-0 ceph-mon[74654]: 9.1f deep-scrub ok
Nov 29 06:25:41 compute-0 ceph-mon[74654]: pgmap v388: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 62 B/s, 0 objects/s recovering
Nov 29 06:25:41 compute-0 ceph-mon[74654]: 9.10 scrub starts
Nov 29 06:25:41 compute-0 ceph-mon[74654]: 9.1d scrub starts
Nov 29 06:25:41 compute-0 ceph-mon[74654]: 9.1d scrub ok
Nov 29 06:25:41 compute-0 ceph-mon[74654]: 9.10 scrub ok
Nov 29 06:25:41 compute-0 ceph-mon[74654]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:25:41 compute-0 podman[113582]: 2025-11-29 06:25:41.256639743 +0000 UTC m=+0.618896863 container create d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 06:25:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:41.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:42 compute-0 systemd[1]: Started libpod-conmon-d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54.scope.
Nov 29 06:25:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39002653caa0ab45dc80471b6e03286d68cbb9fc1563cf2991782195e6a41e20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39002653caa0ab45dc80471b6e03286d68cbb9fc1563cf2991782195e6a41e20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39002653caa0ab45dc80471b6e03286d68cbb9fc1563cf2991782195e6a41e20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39002653caa0ab45dc80471b6e03286d68cbb9fc1563cf2991782195e6a41e20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:25:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 29 06:25:43 compute-0 podman[113582]: 2025-11-29 06:25:43.016765798 +0000 UTC m=+2.379022938 container init d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:25:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:43.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:43 compute-0 podman[113582]: 2025-11-29 06:25:43.035123748 +0000 UTC m=+2.397380868 container start d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:25:43 compute-0 podman[113582]: 2025-11-29 06:25:43.398762363 +0000 UTC m=+2.761019493 container attach d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 06:25:43 compute-0 epic_snyder[113644]: {
Nov 29 06:25:43 compute-0 epic_snyder[113644]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:25:43 compute-0 epic_snyder[113644]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:25:43 compute-0 epic_snyder[113644]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:25:43 compute-0 epic_snyder[113644]:         "osd_id": 1,
Nov 29 06:25:43 compute-0 epic_snyder[113644]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:25:43 compute-0 epic_snyder[113644]:         "type": "bluestore"
Nov 29 06:25:43 compute-0 epic_snyder[113644]:     }
Nov 29 06:25:43 compute-0 epic_snyder[113644]: }
Nov 29 06:25:43 compute-0 sudo[113574]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:43 compute-0 systemd[1]: libpod-d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54.scope: Deactivated successfully.
Nov 29 06:25:43 compute-0 podman[113582]: 2025-11-29 06:25:43.889124177 +0000 UTC m=+3.251381307 container died d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 06:25:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:43.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:45.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:45.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:46 compute-0 sshd-session[113787]: Received disconnect from 79.116.35.29 port 46872:11: Bye Bye [preauth]
Nov 29 06:25:46 compute-0 sshd-session[113787]: Disconnected from authenticating user root 79.116.35.29 port 46872 [preauth]
Nov 29 06:25:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:25:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:25:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:25:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 29 06:25:46 compute-0 ceph-mon[74654]: pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:25:46 compute-0 ceph-mon[74654]: 9.11 scrub starts
Nov 29 06:25:46 compute-0 ceph-mon[74654]: 9.11 scrub ok
Nov 29 06:25:46 compute-0 ceph-mon[74654]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:46 compute-0 ceph-mon[74654]: pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:25:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:25:46 compute-0 ceph-mon[74654]: osdmap e135: 3 total, 3 up, 3 in
Nov 29 06:25:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 06:25:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-39002653caa0ab45dc80471b6e03286d68cbb9fc1563cf2991782195e6a41e20-merged.mount: Deactivated successfully.
Nov 29 06:25:46 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 29 06:25:46 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:25:46 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:25:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:47.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:47 compute-0 sudo[113939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxztmksvbujlwzmiktxkqpiaypvdxxzy ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764397546.720474-559-102955465653603/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764397546.720474-559-102955465653603/args'
Nov 29 06:25:47 compute-0 sudo[113939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:47 compute-0 sudo[113939]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:47 compute-0 sudo[114107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmyvqvspjcbzuuohealoanihyvuvpgbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397547.6414602-592-46676824987325/AnsiballZ_dnf.py'
Nov 29 06:25:47 compute-0 sudo[114107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:47.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:48 compute-0 python3.9[114109]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:25:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:25:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:49.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:25:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:49.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:25:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 29 06:25:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:51.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:51 compute-0 sudo[114107]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 29 06:25:51 compute-0 ceph-mon[74654]: pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:51 compute-0 ceph-mon[74654]: pgmap v395: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:25:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:25:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 06:25:51 compute-0 podman[113582]: 2025-11-29 06:25:51.607958961 +0000 UTC m=+10.970216111 container remove d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:25:51 compute-0 sudo[112532]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:51 compute-0 systemd[1]: libpod-conmon-d3aec66fa4f6311cfcce4a377f06332f6acf81bd52f7e454782157b30e2c1e54.scope: Deactivated successfully.
Nov 29 06:25:51 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 29 06:25:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:25:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:51.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:52 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:52 compute-0 sudo[114263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gomtgdmfbqahmgpeqsjiuenivawrshhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397551.6583242-631-100368266473131/AnsiballZ_package_facts.py'
Nov 29 06:25:52 compute-0 sudo[114263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:52 compute-0 python3.9[114265]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 06:25:52 compute-0 sudo[114263]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:25:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:53.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:53 compute-0 ceph-mon[74654]: pgmap v396: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:53 compute-0 ceph-mon[74654]: osdmap e136: 3 total, 3 up, 3 in
Nov 29 06:25:53 compute-0 ceph-mon[74654]: pgmap v398: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:53 compute-0 ceph-mon[74654]: pgmap v399: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:53 compute-0 ceph-mon[74654]: osdmap e137: 3 total, 3 up, 3 in
Nov 29 06:25:53 compute-0 ceph-mon[74654]: pgmap v401: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:53 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:53.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:54 compute-0 sudo[114416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgywabyjspxbabgmopbmtgukphgxrvjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397553.6486027-661-64243263988622/AnsiballZ_stat.py'
Nov 29 06:25:54 compute-0 sudo[114416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:54 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:54 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 6ac88d45-3d8a-4824-ba5d-33b78eb582e9 does not exist
Nov 29 06:25:54 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev cc71a288-4ddc-46fd-a55c-e9f907082bdb does not exist
Nov 29 06:25:54 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev d8c8119c-3329-41d0-af59-22fcd62acf40 does not exist
Nov 29 06:25:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:25:54
Nov 29 06:25:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:25:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Some PGs (0.003279) are unknown; try again later
Nov 29 06:25:54 compute-0 sudo[114419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:25:54 compute-0 sudo[114419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:54 compute-0 sudo[114419]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:54 compute-0 sudo[114444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:25:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:25:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:25:54 compute-0 sudo[114444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:25:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:25:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:25:54 compute-0 sudo[114444]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:54 compute-0 python3.9[114418]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:25:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:25:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:25:54 compute-0 sudo[114416]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:54 compute-0 sudo[114544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjfvrrfhzyzhjhozdxolukjkwvmngvtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397553.6486027-661-64243263988622/AnsiballZ_file.py'
Nov 29 06:25:54 compute-0 sudo[114544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:54 compute-0 python3.9[114546]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:25:54 compute-0 sudo[114544]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 29 06:25:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:55.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:55 compute-0 sudo[114697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mibmizzdeawmtgmjhshikleyoixyqega ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397555.03104-697-24032645835763/AnsiballZ_stat.py'
Nov 29 06:25:55 compute-0 sudo[114697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:55 compute-0 python3.9[114699]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:25:55 compute-0 sudo[114697]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:55 compute-0 sudo[114775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wahzhehsssxzwlrtalwpjzmwpcnbcewv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397555.03104-697-24032645835763/AnsiballZ_file.py'
Nov 29 06:25:55 compute-0 sudo[114775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:25:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:55.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:25:56 compute-0 python3.9[114777]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:25:56 compute-0 sudo[114775]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:25:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:25:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:57.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:25:57 compute-0 sudo[114928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpfrbedclaykbbmfzbcoklpyohagibtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397557.2178767-751-185601253126908/AnsiballZ_lineinfile.py'
Nov 29 06:25:57 compute-0 sudo[114928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:25:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:25:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:25:58.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:25:58 compute-0 python3.9[114930]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:25:58 compute-0 sudo[114928]: pam_unix(sudo:session): session closed for user root
Nov 29 06:25:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 06:25:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:25:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:25:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:25:59.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:25:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 29 06:25:59 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:25:59 compute-0 sudo[115081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijydrptywroypfnesbnvpdgiitevlhlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397559.4015238-796-33592647736030/AnsiballZ_setup.py'
Nov 29 06:25:59 compute-0 sudo[115081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:00.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:00 compute-0 python3.9[115083]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:26:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 12 B/s, 0 objects/s recovering
Nov 29 06:26:00 compute-0 sudo[115092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:26:00 compute-0 sudo[115092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:26:00 compute-0 sudo[115092]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:00 compute-0 sudo[115081]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:00 compute-0 sudo[115117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:26:00 compute-0 sudo[115117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:26:00 compute-0 sudo[115117]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:01.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:01 compute-0 sudo[115216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-driuutkjsjandxpwkxmwbqeuqjpjddnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397559.4015238-796-33592647736030/AnsiballZ_systemd.py'
Nov 29 06:26:01 compute-0 sudo[115216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:01 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 29 06:26:01 compute-0 python3.9[115218]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:26:01 compute-0 sudo[115216]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:02.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 06:26:02 compute-0 sshd-session[109675]: Connection closed by 192.168.122.30 port 38358
Nov 29 06:26:02 compute-0 sshd-session[109672]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:26:02 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Nov 29 06:26:02 compute-0 systemd[1]: session-38.scope: Consumed 26.215s CPU time.
Nov 29 06:26:02 compute-0 systemd-logind[797]: Session 38 logged out. Waiting for processes to exit.
Nov 29 06:26:02 compute-0 systemd-logind[797]: Removed session 38.
Nov 29 06:26:02 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 luod=0'0 crt=56'1130 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:26:02 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:26:02 compute-0 sshd-session[71617]: Received disconnect from 38.102.83.107 port 45836:11: disconnected by user
Nov 29 06:26:02 compute-0 sshd-session[71617]: Disconnected from user zuul 38.102.83.107 port 45836
Nov 29 06:26:02 compute-0 sshd-session[71614]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:26:02 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 29 06:26:02 compute-0 systemd[1]: session-18.scope: Consumed 1min 24.311s CPU time.
Nov 29 06:26:02 compute-0 systemd-logind[797]: Session 18 logged out. Waiting for processes to exit.
Nov 29 06:26:02 compute-0 systemd-logind[797]: Removed session 18.
Nov 29 06:26:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:03.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:26:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 29 06:26:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:04.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 29 06:26:04 compute-0 ceph-mon[74654]: pgmap v402: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:04 compute-0 ceph-mon[74654]: pgmap v403: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:04 compute-0 ceph-mon[74654]: pgmap v404: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 06:26:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:05.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:05 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 29 06:26:05 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=138/139 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:26:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:06.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:07 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:26:07 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 1703 writes, 8021 keys, 1703 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 1703 writes, 1703 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1703 writes, 8021 keys, 1703 commit groups, 1.0 writes per commit group, ingest: 11.38 MB, 0.02 MB/s
                                           Interval WAL: 1703 writes, 1703 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   55.46 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   55.46 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e1a58311f0#2 capacity: 304.00 MB usage: 57.08 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(10,56.30 KB,0.0180847%) FilterBlock(2,0.42 KB,0.000135522%) IndexBlock(2,0.36 KB,0.000115445%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 06:26:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:07.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:08.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:08 compute-0 sshd-session[115250]: Accepted publickey for zuul from 192.168.122.30 port 47582 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:26:08 compute-0 systemd-logind[797]: New session 39 of user zuul.
Nov 29 06:26:08 compute-0 systemd[1]: Started Session 39 of User zuul.
Nov 29 06:26:08 compute-0 sshd-session[115250]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:26:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:26:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:09.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:09 compute-0 sshd-session[115248]: Invalid user admin123 from 31.6.212.12 port 54110
Nov 29 06:26:09 compute-0 sshd-session[115248]: Received disconnect from 31.6.212.12 port 54110:11: Bye Bye [preauth]
Nov 29 06:26:09 compute-0 sshd-session[115248]: Disconnected from invalid user admin123 31.6.212.12 port 54110 [preauth]
Nov 29 06:26:09 compute-0 sudo[115404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arihmqxamjgemjvfcmeoqztkzmcdocjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397568.9176931-31-78783639096378/AnsiballZ_file.py'
Nov 29 06:26:09 compute-0 sudo[115404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:09 compute-0 ceph-mon[74654]: pgmap v406: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 12 B/s, 0 objects/s recovering
Nov 29 06:26:09 compute-0 ceph-mon[74654]: osdmap e138: 3 total, 3 up, 3 in
Nov 29 06:26:09 compute-0 ceph-mon[74654]: pgmap v407: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 06:26:09 compute-0 ceph-mon[74654]: pgmap v408: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:09 compute-0 ceph-mon[74654]: osdmap e139: 3 total, 3 up, 3 in
Nov 29 06:26:09 compute-0 python3.9[115406]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:09 compute-0 sudo[115404]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:10.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:10 compute-0 sudo[115556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwdwqkkkvxzlujrhaicznhdqcplnqvkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397569.9441917-67-270799782952357/AnsiballZ_stat.py'
Nov 29 06:26:10 compute-0 sudo[115556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:10 compute-0 python3.9[115558]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:26:10 compute-0 sudo[115556]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:10 compute-0 ceph-mon[74654]: pgmap v410: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:10 compute-0 ceph-mon[74654]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:10 compute-0 sudo[115635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kffshpwbkshdulrklggfionhfxkoclei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397569.9441917-67-270799782952357/AnsiballZ_file.py'
Nov 29 06:26:10 compute-0 sudo[115635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:11.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:11 compute-0 python3.9[115637]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:11 compute-0 sudo[115635]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:11 compute-0 sshd-session[115253]: Connection closed by 192.168.122.30 port 47582
Nov 29 06:26:11 compute-0 sshd-session[115250]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:26:11 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Nov 29 06:26:11 compute-0 systemd[1]: session-39.scope: Consumed 1.804s CPU time.
Nov 29 06:26:11 compute-0 systemd-logind[797]: Session 39 logged out. Waiting for processes to exit.
Nov 29 06:26:11 compute-0 systemd-logind[797]: Removed session 39.
Nov 29 06:26:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:12.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:26:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:26:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:13.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:13 compute-0 ceph-mon[74654]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:26:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 06:26:14 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 29 06:26:14 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:14.657424) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 06:26:14 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 29 06:26:14 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397574657535, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 8329, "num_deletes": 251, "total_data_size": 12098626, "memory_usage": 12307704, "flush_reason": "Manual Compaction"}
Nov 29 06:26:14 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 29 06:26:15 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397575008751, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 10230000, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 145, "largest_seqno": 8465, "table_properties": {"data_size": 10196642, "index_size": 22363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 94444, "raw_average_key_size": 23, "raw_value_size": 10119073, "raw_average_value_size": 2558, "num_data_blocks": 978, "num_entries": 3955, "num_filter_entries": 3955, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396967, "oldest_key_time": 1764396967, "file_creation_time": 1764397574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:26:15 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 351616 microseconds, and 21586 cpu microseconds.
Nov 29 06:26:15 compute-0 ceph-mon[74654]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:15.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:15 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:15.009040) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 10230000 bytes OK
Nov 29 06:26:15 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:15.009127) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 29 06:26:15 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:15.324327) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 29 06:26:15 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:15.324385) EVENT_LOG_v1 {"time_micros": 1764397575324375, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 29 06:26:15 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:15.324412) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 29 06:26:15 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 12060700, prev total WAL file size 12061854, number of live WAL files 2.
Nov 29 06:26:15 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:26:15 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:15.327088) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 29 06:26:15 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 29 06:26:15 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(9990KB) 13(53KB) 8(1944B)]
Nov 29 06:26:15 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397575327208, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 10286793, "oldest_snapshot_seqno": -1}
Nov 29 06:26:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:16.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:16 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3767 keys, 10242256 bytes, temperature: kUnknown
Nov 29 06:26:16 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397576121445, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 10242256, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10209349, "index_size": 22365, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92349, "raw_average_key_size": 24, "raw_value_size": 10133496, "raw_average_value_size": 2690, "num_data_blocks": 982, "num_entries": 3767, "num_filter_entries": 3767, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764397575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:26:16 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:26:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Nov 29 06:26:16 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:16.121731) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 10242256 bytes
Nov 29 06:26:16 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:16.207478) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 13.0 rd, 12.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(9.8, 0.0 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 4059, records dropped: 292 output_compression: NoCompression
Nov 29 06:26:16 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:16.207551) EVENT_LOG_v1 {"time_micros": 1764397576207527, "job": 4, "event": "compaction_finished", "compaction_time_micros": 794327, "compaction_time_cpu_micros": 24489, "output_level": 6, "num_output_files": 1, "total_output_size": 10242256, "num_input_records": 4059, "num_output_records": 3767, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 06:26:16 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:26:16 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397576212695, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 29 06:26:16 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:26:16 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397576213022, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 29 06:26:16 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:26:16 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397576213112, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 29 06:26:16 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:26:15.326947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:26:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:17.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:17 compute-0 sshd-session[115667]: Accepted publickey for zuul from 192.168.122.30 port 53980 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:26:17 compute-0 systemd-logind[797]: New session 40 of user zuul.
Nov 29 06:26:17 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 29 06:26:17 compute-0 sshd-session[115667]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:26:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:18.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Nov 29 06:26:18 compute-0 ceph-mon[74654]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 06:26:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:19.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:19 compute-0 python3.9[115820]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:26:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:26:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:20.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:20 compute-0 sudo[115975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyzsgmswnjjhptlqmxuhoozbgsezevqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397579.8799155-64-96451296051805/AnsiballZ_file.py'
Nov 29 06:26:20 compute-0 sudo[115975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:20 compute-0 sudo[115978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:26:20 compute-0 sudo[115978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:26:20 compute-0 sudo[115978]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:20 compute-0 sudo[116003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:26:20 compute-0 sudo[116003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:26:20 compute-0 sudo[116003]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:20 compute-0 python3.9[115977]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:20 compute-0 sudo[115975]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:20 compute-0 ceph-mon[74654]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Nov 29 06:26:20 compute-0 ceph-mon[74654]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Nov 29 06:26:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:21.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:21 compute-0 sudo[116201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqtfmbbszahzgfoefxbtsgomwhvvvutp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397580.9884558-88-80307032825366/AnsiballZ_stat.py'
Nov 29 06:26:21 compute-0 sudo[116201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:21 compute-0 python3.9[116203]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:26:21 compute-0 sudo[116201]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:22.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:22 compute-0 sudo[116279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zygubztfzczxmgcecydupfbhvpmmiwog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397580.9884558-88-80307032825366/AnsiballZ_file.py'
Nov 29 06:26:22 compute-0 sudo[116279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:22 compute-0 ceph-mon[74654]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:22 compute-0 python3.9[116281]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.06e0gsw3 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:22 compute-0 sudo[116279]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:23.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:23 compute-0 sudo[116432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijsrnpfzrmcxmhmzwuagtkbjbwlcuwhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397583.1922028-148-49895247653577/AnsiballZ_stat.py'
Nov 29 06:26:23 compute-0 sudo[116432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:23 compute-0 python3.9[116434]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:26:23 compute-0 sudo[116432]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:24 compute-0 sudo[116510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whleohxpkyvwtyhxnwoyzfzzspkrpayv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397583.1922028-148-49895247653577/AnsiballZ_file.py'
Nov 29 06:26:24 compute-0 sudo[116510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:24.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:24 compute-0 ceph-mon[74654]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:24 compute-0 python3.9[116512]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.dg5z02zt recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:24 compute-0 sudo[116510]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:26:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:26:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:26:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:26:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:26:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:26:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:26:24 compute-0 sudo[116663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwrirsabcfappwbatryphrkwdfazygra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397584.5123696-187-208796041022521/AnsiballZ_file.py'
Nov 29 06:26:24 compute-0 sudo[116663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:25.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:25 compute-0 python3.9[116665]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:26:25 compute-0 sudo[116663]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:25 compute-0 sudo[116815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhvzuhumgmkzlnvfwzkmbswvhrcoksuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397585.491759-211-113951144188570/AnsiballZ_stat.py'
Nov 29 06:26:25 compute-0 sudo[116815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:26 compute-0 python3.9[116817]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:26:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:26.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:26 compute-0 sudo[116815]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:26 compute-0 sudo[116893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzjslfhtyvuwfpehicuoyvpjhtwlwcsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397585.491759-211-113951144188570/AnsiballZ_file.py'
Nov 29 06:26:26 compute-0 sudo[116893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:26 compute-0 python3.9[116895]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:26:26 compute-0 sudo[116893]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:27.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:27 compute-0 sudo[117046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phmirnaullcwztvhtyeebgrlmvzkqgku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397586.7638042-211-124701603329207/AnsiballZ_stat.py'
Nov 29 06:26:27 compute-0 sudo[117046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:27 compute-0 python3.9[117048]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:26:27 compute-0 sudo[117046]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:27 compute-0 sudo[117124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llncgnkkikonqyjjfxwuwohxxcuaazlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397586.7638042-211-124701603329207/AnsiballZ_file.py'
Nov 29 06:26:27 compute-0 sudo[117124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:27 compute-0 python3.9[117126]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:26:27 compute-0 sudo[117124]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:28.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:28 compute-0 sudo[117280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjkzyeplsdopvvxtjhlzoxmvkjapaman ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397588.0854964-280-86557737969072/AnsiballZ_file.py'
Nov 29 06:26:28 compute-0 sudo[117280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:28 compute-0 python3.9[117282]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:28 compute-0 sudo[117280]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:28 compute-0 sshd-session[117127]: Invalid user kingbase from 104.208.108.166 port 52018
Nov 29 06:26:28 compute-0 sshd-session[117228]: Invalid user smart from 138.124.186.225 port 33784
Nov 29 06:26:28 compute-0 sshd-session[117228]: Received disconnect from 138.124.186.225 port 33784:11: Bye Bye [preauth]
Nov 29 06:26:28 compute-0 sshd-session[117228]: Disconnected from invalid user smart 138.124.186.225 port 33784 [preauth]
Nov 29 06:26:29 compute-0 sshd-session[117127]: Received disconnect from 104.208.108.166 port 52018:11: Bye Bye [preauth]
Nov 29 06:26:29 compute-0 sshd-session[117127]: Disconnected from invalid user kingbase 104.208.108.166 port 52018 [preauth]
Nov 29 06:26:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:29.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:29 compute-0 sudo[117433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyzahjtszbjvcfxdiznsjwudxlsuyrzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397588.8847022-304-87228776078547/AnsiballZ_stat.py'
Nov 29 06:26:29 compute-0 sudo[117433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:29 compute-0 python3.9[117435]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:26:29 compute-0 sudo[117433]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:26:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:26:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:26:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:26:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:26:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:26:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:26:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:26:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:26:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:26:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:26:29 compute-0 sudo[117511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqcxtlozwtgiuikvwkkfpryoenvlwbdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397588.8847022-304-87228776078547/AnsiballZ_file.py'
Nov 29 06:26:29 compute-0 sudo[117511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:29 compute-0 python3.9[117513]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:30 compute-0 sudo[117511]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:30.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:30 compute-0 sudo[117665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvsqayigbtzqgxupriyeigfsbzxagaar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397590.2914233-340-61213198490324/AnsiballZ_stat.py'
Nov 29 06:26:30 compute-0 sudo[117665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:30 compute-0 ceph-mon[74654]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:30 compute-0 python3.9[117667]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:26:30 compute-0 sudo[117665]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:26:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:31.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:26:31 compute-0 sudo[117744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ailpqluatvgaocubidxmvpjfaglybbit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397590.2914233-340-61213198490324/AnsiballZ_file.py'
Nov 29 06:26:31 compute-0 sudo[117744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:31 compute-0 python3.9[117746]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:31 compute-0 sudo[117744]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:31 compute-0 sshd-session[117514]: Received disconnect from 103.147.159.91 port 53086:11: Bye Bye [preauth]
Nov 29 06:26:31 compute-0 sshd-session[117514]: Disconnected from authenticating user root 103.147.159.91 port 53086 [preauth]
Nov 29 06:26:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:32.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:32 compute-0 sudo[117896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piyioeaxwspqvorrdeahtmrqgmtuhljz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397591.477046-376-17283126707299/AnsiballZ_systemd.py'
Nov 29 06:26:32 compute-0 sudo[117896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:32 compute-0 python3.9[117898]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:26:32 compute-0 systemd[1]: Reloading.
Nov 29 06:26:32 compute-0 systemd-rc-local-generator[117921]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:26:32 compute-0 systemd-sysv-generator[117927]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:26:32 compute-0 ceph-mon[74654]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:32 compute-0 ceph-mon[74654]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:32 compute-0 ceph-mon[74654]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:32 compute-0 sudo[117896]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:33.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:33 compute-0 sudo[118086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjdhwsunlgwcneotbombfxvxdstayahr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397593.1855655-400-189477373138798/AnsiballZ_stat.py'
Nov 29 06:26:33 compute-0 sudo[118086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:33 compute-0 python3.9[118088]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:26:33 compute-0 sudo[118086]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:34.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:34 compute-0 sudo[118164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbxhkbfumcszcqxrdadvrvdwzffnxlzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397593.1855655-400-189477373138798/AnsiballZ_file.py'
Nov 29 06:26:34 compute-0 sudo[118164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:34 compute-0 python3.9[118166]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:34 compute-0 sudo[118164]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:26:35 compute-0 sudo[118317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yolfohbpdeyruriontgonofmbozvsjuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397594.6755776-436-278075350577877/AnsiballZ_stat.py'
Nov 29 06:26:35 compute-0 sudo[118317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:35.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:35 compute-0 python3.9[118319]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:26:35 compute-0 sudo[118317]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:35 compute-0 sudo[118395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taaslysvcrtmkeisvqltprgnsuywmool ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397594.6755776-436-278075350577877/AnsiballZ_file.py'
Nov 29 06:26:35 compute-0 sudo[118395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:35 compute-0 python3.9[118397]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:35 compute-0 sudo[118395]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:36.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:36 compute-0 sudo[118547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzkgrhvwbwrtbucegocyowqqhslwuppk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397595.9267197-472-131016372015187/AnsiballZ_systemd.py'
Nov 29 06:26:36 compute-0 sudo[118547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:36 compute-0 python3.9[118549]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:26:36 compute-0 systemd[1]: Reloading.
Nov 29 06:26:36 compute-0 ceph-mon[74654]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:36 compute-0 systemd-rc-local-generator[118579]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:26:36 compute-0 systemd-sysv-generator[118583]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:26:36 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 06:26:36 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 06:26:36 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 06:26:36 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 06:26:36 compute-0 sudo[118547]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:37.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:37 compute-0 ceph-mon[74654]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:37 compute-0 ceph-mon[74654]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:37 compute-0 python3.9[118741]: ansible-ansible.builtin.service_facts Invoked
Nov 29 06:26:37 compute-0 network[118758]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 06:26:37 compute-0 network[118759]: 'network-scripts' will be removed from distribution in near future.
Nov 29 06:26:37 compute-0 network[118760]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 06:26:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:38.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:39.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:26:39 compute-0 ceph-mon[74654]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:40.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:40 compute-0 sudo[118820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:26:40 compute-0 sudo[118820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:26:40 compute-0 sudo[118820]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:40 compute-0 sudo[118845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:26:40 compute-0 sudo[118845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:26:40 compute-0 sudo[118845]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:26:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:41.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:26:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:42.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:43.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:44.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:44 compute-0 sudo[119073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqusgwvgpiiscqgskltnyysumitiorbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397604.1369426-550-51436259002182/AnsiballZ_stat.py'
Nov 29 06:26:44 compute-0 sudo[119073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:44 compute-0 python3.9[119075]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:26:44 compute-0 sudo[119073]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:26:45 compute-0 sudo[119152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goixuammplqcjsvtnzzcirvzgrjayhkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397604.1369426-550-51436259002182/AnsiballZ_file.py'
Nov 29 06:26:45 compute-0 sudo[119152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:45.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:45 compute-0 python3.9[119154]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:45 compute-0 sudo[119152]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:46.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:47.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:48.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:48 compute-0 sshd-session[119180]: Invalid user tempuser from 79.116.35.29 port 46182
Nov 29 06:26:48 compute-0 sshd-session[119180]: Received disconnect from 79.116.35.29 port 46182:11: Bye Bye [preauth]
Nov 29 06:26:48 compute-0 sshd-session[119180]: Disconnected from invalid user tempuser 79.116.35.29 port 46182 [preauth]
Nov 29 06:26:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:49.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:26:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:26:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:50.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:26:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:50 compute-0 sudo[119308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufrcgvxszuneaymolqtvrqijnytjvdag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397610.160618-589-117873176261679/AnsiballZ_file.py'
Nov 29 06:26:50 compute-0 sudo[119308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:50 compute-0 python3.9[119310]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:50 compute-0 sudo[119308]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:26:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:51.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:26:51 compute-0 sudo[119461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muhrwkpxbuiqixtyqviisejrelkmtilk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397610.979324-613-15081514424503/AnsiballZ_stat.py'
Nov 29 06:26:51 compute-0 sudo[119461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:51 compute-0 python3.9[119463]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:26:51 compute-0 sudo[119461]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:51 compute-0 sudo[119539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acmklkwtexgewbuartqgoudvzlkelgey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397610.979324-613-15081514424503/AnsiballZ_file.py'
Nov 29 06:26:51 compute-0 sudo[119539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:51 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 06:26:51 compute-0 ceph-mon[74654]: paxos.0).electionLogic(15) init, last seen epoch 15, mid-election, bumping
Nov 29 06:26:52 compute-0 python3.9[119541]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:52 compute-0 sudo[119539]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:52.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:52 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:26:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 06:26:52 compute-0 ceph-mon[74654]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 06:26:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:26:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 06:26:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 29 06:26:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.vxabpq(active, since 9m), standbys: compute-2.ngsyhe, compute-1.gaxpay
Nov 29 06:26:52 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 06:26:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:53.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:53 compute-0 sudo[119692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bomkmakduucziuczoykvlcagwlwbbshq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397612.6810813-658-6571107884559/AnsiballZ_timezone.py'
Nov 29 06:26:53 compute-0 sudo[119692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:53 compute-0 python3.9[119694]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 06:26:53 compute-0 systemd[1]: Starting Time & Date Service...
Nov 29 06:26:53 compute-0 systemd[1]: Started Time & Date Service.
Nov 29 06:26:53 compute-0 sudo[119692]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:54.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:26:54
Nov 29 06:26:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:26:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:26:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.log', '.mgr', 'images']
Nov 29 06:26:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:26:54 compute-0 sudo[119848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chxcwrixzhgusxgudchvqlneicsplnnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397613.9559765-685-188723777227963/AnsiballZ_file.py'
Nov 29 06:26:54 compute-0 sudo[119848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:26:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:26:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:26:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:26:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:26:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:26:54 compute-0 python3.9[119850]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:54 compute-0 sudo[119848]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:54 compute-0 sudo[119877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:26:54 compute-0 sudo[119877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:26:54 compute-0 sudo[119877]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:54 compute-0 sudo[119902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:26:54 compute-0 sudo[119902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:26:54 compute-0 sudo[119902]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:54 compute-0 sudo[119927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:26:54 compute-0 sudo[119927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:26:54 compute-0 sudo[119927]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:26:54 compute-0 sudo[119981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 06:26:54 compute-0 sudo[119981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:26:54 compute-0 ceph-mon[74654]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:54 compute-0 ceph-mon[74654]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:54 compute-0 ceph-mon[74654]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:54 compute-0 ceph-mon[74654]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:54 compute-0 ceph-mon[74654]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:54 compute-0 ceph-mon[74654]: mon.compute-1 calling monitor election
Nov 29 06:26:54 compute-0 ceph-mon[74654]: mon.compute-0 calling monitor election
Nov 29 06:26:54 compute-0 ceph-mon[74654]: mon.compute-2 calling monitor election
Nov 29 06:26:54 compute-0 ceph-mon[74654]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:54 compute-0 ceph-mon[74654]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 06:26:54 compute-0 ceph-mon[74654]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 06:26:54 compute-0 ceph-mon[74654]: fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 06:26:54 compute-0 ceph-mon[74654]: osdmap e139: 3 total, 3 up, 3 in
Nov 29 06:26:54 compute-0 ceph-mon[74654]: mgrmap e10: compute-0.vxabpq(active, since 9m), standbys: compute-2.ngsyhe, compute-1.gaxpay
Nov 29 06:26:54 compute-0 ceph-mon[74654]: overall HEALTH_OK
Nov 29 06:26:55 compute-0 sudo[120122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qceawbxbzebeumhncyiiacppydldkxgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397614.7520876-709-46072602866284/AnsiballZ_stat.py'
Nov 29 06:26:55 compute-0 sudo[120122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:55.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:55 compute-0 python3.9[120128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:26:55 compute-0 sudo[120122]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:55 compute-0 sshd-session[119851]: Invalid user bitwarden from 176.109.67.96 port 54410
Nov 29 06:26:55 compute-0 sshd-session[119851]: Received disconnect from 176.109.67.96 port 54410:11: Bye Bye [preauth]
Nov 29 06:26:55 compute-0 sshd-session[119851]: Disconnected from invalid user bitwarden 176.109.67.96 port 54410 [preauth]
Nov 29 06:26:55 compute-0 sudo[120262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbcwwhyyjdhshhkwbrndldymgdjvxwzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397614.7520876-709-46072602866284/AnsiballZ_file.py'
Nov 29 06:26:55 compute-0 sudo[120262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:55 compute-0 podman[120175]: 2025-11-29 06:26:55.629240098 +0000 UTC m=+0.384062012 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 06:26:55 compute-0 python3.9[120264]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:55 compute-0 sudo[120262]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:55 compute-0 podman[120175]: 2025-11-29 06:26:55.832021462 +0000 UTC m=+0.586843406 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:26:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:56.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:56 compute-0 ceph-mon[74654]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:56 compute-0 sudo[120511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahgvqeejmcpatzfmyzwdsfhohhwwmskm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397615.9678183-745-179993579467631/AnsiballZ_stat.py'
Nov 29 06:26:56 compute-0 sudo[120511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:56 compute-0 python3.9[120522]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:26:56 compute-0 sudo[120511]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:56 compute-0 podman[120558]: 2025-11-29 06:26:56.428259643 +0000 UTC m=+0.073441724 container exec f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:26:56 compute-0 podman[120558]: 2025-11-29 06:26:56.435910359 +0000 UTC m=+0.081092430 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:26:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:26:56 compute-0 sudo[120710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgrkdvdfeixlwvmeycekfquajltgeayl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397615.9678183-745-179993579467631/AnsiballZ_file.py'
Nov 29 06:26:56 compute-0 sudo[120710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:26:56 compute-0 podman[120682]: 2025-11-29 06:26:56.749271304 +0000 UTC m=+0.126243564 container exec c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, name=keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=keepalived-container, vcs-type=git, version=2.2.4, distribution-scope=public)
Nov 29 06:26:56 compute-0 python3.9[120714]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.p9tbr6va recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:56 compute-0 sudo[120710]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:56 compute-0 podman[120722]: 2025-11-29 06:26:56.88407343 +0000 UTC m=+0.112436015 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, version=2.2.4, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.tags=Ceph keepalived, name=keepalived, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 29 06:26:57 compute-0 podman[120682]: 2025-11-29 06:26:57.043620093 +0000 UTC m=+0.420592283 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, distribution-scope=public, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, build-date=2023-02-22T09:23:20)
Nov 29 06:26:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:57.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:57 compute-0 sudo[119981]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:26:57 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:26:57 compute-0 sudo[120885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egynypxtpnvgjeanjmwzmwhmjfemouzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397617.1405265-781-206866941693031/AnsiballZ_stat.py'
Nov 29 06:26:57 compute-0 sudo[120885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:26:57 compute-0 python3.9[120887]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:26:57 compute-0 sudo[120885]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:58 compute-0 sudo[120963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfidwechjbxvzlmcdqjlwvyfpnfdhduw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397617.1405265-781-206866941693031/AnsiballZ_file.py'
Nov 29 06:26:58 compute-0 sudo[120963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:26:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:26:58.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:26:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:58 compute-0 python3.9[120965]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:26:58 compute-0 sudo[120963]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:58 compute-0 ceph-mon[74654]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:26:58 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:26:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:26:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:26:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:26:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:26:59.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:26:59 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:26:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:26:59 compute-0 sudo[121116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzeavjqzidvibfjehxymlwdcvhxxhenh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397618.6624246-820-100973651340177/AnsiballZ_command.py'
Nov 29 06:26:59 compute-0 sudo[121116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:26:59 compute-0 python3.9[121118]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:26:59 compute-0 sudo[121116]: pam_unix(sudo:session): session closed for user root
Nov 29 06:26:59 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:26:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:27:00 compute-0 sudo[121269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlbbwmlvpajsjgikpvapuunkgtouqnxn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764397619.54906-844-183045914281469/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 06:27:00 compute-0 sudo[121269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:00.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:00 compute-0 python3[121271]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 06:27:00 compute-0 sudo[121269]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:00 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:27:00 compute-0 sudo[121421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taydhslsnvbxndrplythhzhycbpckvbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397620.4899821-868-61569396734437/AnsiballZ_stat.py'
Nov 29 06:27:00 compute-0 sudo[121421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:00 compute-0 sudo[121425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:00 compute-0 sudo[121425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:00 compute-0 sudo[121425]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:01 compute-0 sudo[121450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:01 compute-0 sudo[121450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:01 compute-0 sudo[121450]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:01 compute-0 python3.9[121423]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:27:01 compute-0 sudo[121421]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:01.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:01 compute-0 sudo[121550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doneoyhhfhtkhkfyhkfgxgovgzdzundx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397620.4899821-868-61569396734437/AnsiballZ_file.py'
Nov 29 06:27:01 compute-0 sudo[121550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:01 compute-0 python3.9[121552]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:27:01 compute-0 sudo[121550]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:01 compute-0 sshd-session[121554]: Invalid user minecraft from 162.214.92.14 port 35778
Nov 29 06:27:01 compute-0 sshd-session[121554]: Received disconnect from 162.214.92.14 port 35778:11: Bye Bye [preauth]
Nov 29 06:27:01 compute-0 sshd-session[121554]: Disconnected from invalid user minecraft 162.214.92.14 port 35778 [preauth]
Nov 29 06:27:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:02.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:02 compute-0 sudo[121704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tusgxflfcrofhntmaqkwkaflhyesnqkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397621.7591438-904-253396148196034/AnsiballZ_stat.py'
Nov 29 06:27:02 compute-0 sudo[121704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:02 compute-0 python3.9[121706]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:27:02 compute-0 sudo[121704]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:02 compute-0 sudo[121782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqssgbmrcqecabpzmmxxgbleufiyeepz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397621.7591438-904-253396148196034/AnsiballZ_file.py'
Nov 29 06:27:02 compute-0 sudo[121782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:03 compute-0 python3.9[121784]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:27:03 compute-0 sudo[121782]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:03.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:03 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:27:03 compute-0 ceph-mon[74654]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:03 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:27:03 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:27:03 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:27:03 compute-0 sudo[121935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-putfxrntduhsremufvcewknsyljgurfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397623.246949-940-45049472023963/AnsiballZ_stat.py'
Nov 29 06:27:03 compute-0 sudo[121935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:03 compute-0 python3.9[121937]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:27:03 compute-0 sudo[121935]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:03 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:27:04 compute-0 sudo[121963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:04 compute-0 sudo[121963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:04 compute-0 sudo[121963]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:04 compute-0 sudo[122012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:27:04 compute-0 sudo[122012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:04 compute-0 sudo[122012]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:04.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:04 compute-0 sudo[122062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emhwxiqxiqtrvbqyaqzkmldtkmfmwskl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397623.246949-940-45049472023963/AnsiballZ_file.py'
Nov 29 06:27:04 compute-0 sudo[122062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:04 compute-0 sudo[122065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:04 compute-0 sudo[122065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:04 compute-0 sudo[122065]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:04 compute-0 sudo[122091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:27:04 compute-0 sudo[122091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:04 compute-0 python3.9[122066]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:27:04 compute-0 sudo[122062]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:04 compute-0 sudo[122091]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:04 compute-0 sudo[122297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kckuxscjzwlgftfrkawnnyvcxmaeboci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397624.5589895-976-179728753850319/AnsiballZ_stat.py'
Nov 29 06:27:04 compute-0 sudo[122297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:05 compute-0 python3.9[122299]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:27:05 compute-0 sudo[122297]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:05.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:05 compute-0 sudo[122375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqkexwitshgoaysgazvxoodiiuximwjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397624.5589895-976-179728753850319/AnsiballZ_file.py'
Nov 29 06:27:05 compute-0 sudo[122375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:05 compute-0 python3.9[122377]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:27:05 compute-0 sudo[122375]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:27:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:27:05 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:27:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:27:05 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:27:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:27:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:06.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:06 compute-0 sudo[122527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjohofpiwinodctxaxepirrlrpgmgxmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397626.1479547-1012-67021223575614/AnsiballZ_stat.py'
Nov 29 06:27:06 compute-0 sudo[122527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:07.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:07 compute-0 sshd-session[122530]: Invalid user git from 118.193.39.127 port 41432
Nov 29 06:27:08 compute-0 sshd-session[122530]: Received disconnect from 118.193.39.127 port 41432:11: Bye Bye [preauth]
Nov 29 06:27:08 compute-0 sshd-session[122530]: Disconnected from invalid user git 118.193.39.127 port 41432 [preauth]
Nov 29 06:27:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:08.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:08 compute-0 python3.9[122529]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:27:08 compute-0 sudo[122527]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:08 compute-0 sudo[122608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnqbhqevxzytimhccsghwkjfgettgvbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397626.1479547-1012-67021223575614/AnsiballZ_file.py'
Nov 29 06:27:08 compute-0 sudo[122608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:08 compute-0 python3.9[122610]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:27:08 compute-0 sudo[122608]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:09.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:09 compute-0 sudo[122761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybysmbwasdhuewbltfyhlukhpdrjrrls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397629.1964686-1051-82515760430707/AnsiballZ_command.py'
Nov 29 06:27:09 compute-0 sudo[122761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:09 compute-0 python3.9[122763]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:27:09 compute-0 sudo[122761]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:10.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:10 compute-0 sudo[122916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyajzxiwxiqnjkquujjgkjrfdbuuflmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397630.0511181-1075-215634954133366/AnsiballZ_blockinfile.py'
Nov 29 06:27:10 compute-0 sudo[122916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:10 compute-0 python3.9[122918]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:27:10 compute-0 sudo[122916]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:27:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:11.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:11 compute-0 sudo[123069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhhddwzqneqnrszgrhrbkvwjimkurxop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397631.0580487-1102-158853205090054/AnsiballZ_file.py'
Nov 29 06:27:11 compute-0 sudo[123069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:11 compute-0 python3.9[123071]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:27:11 compute-0 sudo[123069]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:12.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:12 compute-0 ceph-mon[74654]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:27:12 compute-0 ceph-mon[74654]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:12 compute-0 sudo[123221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxramhjurkmbwhsavfahmeidchxbzsch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397631.849305-1102-227368181527973/AnsiballZ_file.py'
Nov 29 06:27:12 compute-0 sudo[123221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:12 compute-0 python3.9[123223]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:27:12 compute-0 sudo[123221]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 72bf1e0e-faac-4bd8-936b-e080b9ed62a7 does not exist
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev b450179a-7254-47e2-b310-cb17131ba156 does not exist
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 7f448bd4-3e36-4ca4-b4db-39ed18a8ec9a does not exist
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:27:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:27:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:13.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:13 compute-0 sudo[123374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgfzxwijadxdtmidcbrzaemguiadhpiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397632.6282756-1147-160410703667824/AnsiballZ_mount.py'
Nov 29 06:27:13 compute-0 sudo[123374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:13 compute-0 python3.9[123376]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 06:27:13 compute-0 sudo[123374]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:27:13 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:27:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:27:13 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:27:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:27:13 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:27:13 compute-0 sudo[123526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glnsawevozqxyhjyyihkclbizqglhvux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397633.5845714-1147-34433515311113/AnsiballZ_mount.py'
Nov 29 06:27:13 compute-0 sudo[123526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:14 compute-0 sudo[123528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:14 compute-0 sudo[123528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:14 compute-0 sudo[123528]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:14 compute-0 sudo[123554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:27:14 compute-0 sudo[123554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:14 compute-0 sudo[123554]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:14.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:14 compute-0 sudo[123579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:14 compute-0 sudo[123579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:14 compute-0 sudo[123579]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:14 compute-0 sudo[123604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:27:14 compute-0 sudo[123604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:14 compute-0 python3.9[123530]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 06:27:14 compute-0 sudo[123526]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:14 compute-0 ceph-mon[74654]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:14 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:27:14 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:27:14 compute-0 ceph-mon[74654]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:14 compute-0 ceph-mon[74654]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:14 compute-0 ceph-mon[74654]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:14 compute-0 ceph-mon[74654]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:14 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:27:14 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:27:14 compute-0 podman[123689]: 2025-11-29 06:27:14.60876624 +0000 UTC m=+0.043190313 container create d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williams, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:27:14 compute-0 systemd[1]: Started libpod-conmon-d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598.scope.
Nov 29 06:27:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:27:14 compute-0 podman[123689]: 2025-11-29 06:27:14.588915913 +0000 UTC m=+0.023340016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:27:14 compute-0 podman[123689]: 2025-11-29 06:27:14.694942837 +0000 UTC m=+0.129366960 container init d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williams, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 29 06:27:14 compute-0 podman[123689]: 2025-11-29 06:27:14.703195748 +0000 UTC m=+0.137619821 container start d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:27:14 compute-0 podman[123689]: 2025-11-29 06:27:14.707028095 +0000 UTC m=+0.141452248 container attach d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williams, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 06:27:14 compute-0 happy_williams[123705]: 167 167
Nov 29 06:27:14 compute-0 systemd[1]: libpod-d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598.scope: Deactivated successfully.
Nov 29 06:27:14 compute-0 podman[123689]: 2025-11-29 06:27:14.709891056 +0000 UTC m=+0.144315129 container died d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 06:27:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7168389f294baf05d30a68f9b029067e062d02a9184ad8a8e13c4d03f67d526-merged.mount: Deactivated successfully.
Nov 29 06:27:14 compute-0 podman[123689]: 2025-11-29 06:27:14.777642726 +0000 UTC m=+0.212066799 container remove d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williams, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 06:27:14 compute-0 systemd[1]: libpod-conmon-d03167ded80e24be5138ae24b7fd13747892624003cc12510697c501180f9598.scope: Deactivated successfully.
Nov 29 06:27:14 compute-0 sshd-session[115670]: Connection closed by 192.168.122.30 port 53980
Nov 29 06:27:14 compute-0 sshd-session[115667]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:27:14 compute-0 systemd-logind[797]: Session 40 logged out. Waiting for processes to exit.
Nov 29 06:27:14 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 29 06:27:14 compute-0 systemd[1]: session-40.scope: Consumed 33.425s CPU time.
Nov 29 06:27:14 compute-0 systemd-logind[797]: Removed session 40.
Nov 29 06:27:14 compute-0 podman[123729]: 2025-11-29 06:27:14.945906035 +0000 UTC m=+0.047542014 container create 0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:27:14 compute-0 systemd[1]: Started libpod-conmon-0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85.scope.
Nov 29 06:27:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c93023dcab7b162162a83dd4482644e6be42f8a49113398bfa4b7404160265/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c93023dcab7b162162a83dd4482644e6be42f8a49113398bfa4b7404160265/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c93023dcab7b162162a83dd4482644e6be42f8a49113398bfa4b7404160265/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c93023dcab7b162162a83dd4482644e6be42f8a49113398bfa4b7404160265/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:27:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c93023dcab7b162162a83dd4482644e6be42f8a49113398bfa4b7404160265/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:27:15 compute-0 podman[123729]: 2025-11-29 06:27:14.927922891 +0000 UTC m=+0.029558890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:27:15 compute-0 podman[123729]: 2025-11-29 06:27:15.030153628 +0000 UTC m=+0.131789737 container init 0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 06:27:15 compute-0 podman[123729]: 2025-11-29 06:27:15.03842831 +0000 UTC m=+0.140064329 container start 0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:27:15 compute-0 podman[123729]: 2025-11-29 06:27:15.046233439 +0000 UTC m=+0.147869418 container attach 0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:27:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:15.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:15 compute-0 quirky_benz[123746]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:27:15 compute-0 quirky_benz[123746]: --> relative data size: 1.0
Nov 29 06:27:15 compute-0 quirky_benz[123746]: --> All data devices are unavailable
Nov 29 06:27:15 compute-0 systemd[1]: libpod-0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85.scope: Deactivated successfully.
Nov 29 06:27:15 compute-0 podman[123729]: 2025-11-29 06:27:15.872050948 +0000 UTC m=+0.973686957 container died 0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 06:27:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:27:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:16.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4c93023dcab7b162162a83dd4482644e6be42f8a49113398bfa4b7404160265-merged.mount: Deactivated successfully.
Nov 29 06:27:16 compute-0 podman[123729]: 2025-11-29 06:27:16.818689328 +0000 UTC m=+1.920325347 container remove 0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 06:27:16 compute-0 systemd[1]: libpod-conmon-0747636d192b2d529b27a855b0e927d8204b42da46b638236d6129bb2bf41b85.scope: Deactivated successfully.
Nov 29 06:27:16 compute-0 sudo[123604]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:16 compute-0 sudo[123776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:16 compute-0 sudo[123776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:16 compute-0 sudo[123776]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:16 compute-0 sudo[123801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:27:16 compute-0 sudo[123801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:16 compute-0 sudo[123801]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:17 compute-0 sudo[123826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:17 compute-0 sudo[123826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:17 compute-0 sudo[123826]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:17 compute-0 sudo[123851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:27:17 compute-0 sudo[123851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:17.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:17 compute-0 podman[123917]: 2025-11-29 06:27:17.45418851 +0000 UTC m=+0.048603384 container create 26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:27:17 compute-0 systemd[1]: Started libpod-conmon-26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61.scope.
Nov 29 06:27:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:27:17 compute-0 podman[123917]: 2025-11-29 06:27:17.429201599 +0000 UTC m=+0.023616493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:27:17 compute-0 podman[123917]: 2025-11-29 06:27:17.543297199 +0000 UTC m=+0.137712083 container init 26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:27:17 compute-0 podman[123917]: 2025-11-29 06:27:17.548924397 +0000 UTC m=+0.143339271 container start 26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_moser, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:27:17 compute-0 silly_moser[123933]: 167 167
Nov 29 06:27:17 compute-0 systemd[1]: libpod-26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61.scope: Deactivated successfully.
Nov 29 06:27:17 compute-0 podman[123917]: 2025-11-29 06:27:17.554574655 +0000 UTC m=+0.148989549 container attach 26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_moser, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 06:27:17 compute-0 podman[123917]: 2025-11-29 06:27:17.554939636 +0000 UTC m=+0.149354510 container died 26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_moser, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:27:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f390cddfadcf191abad773eb9bfb10c331fbcc824ac4088e5533abb40c700ba0-merged.mount: Deactivated successfully.
Nov 29 06:27:17 compute-0 podman[123917]: 2025-11-29 06:27:17.633383035 +0000 UTC m=+0.227797939 container remove 26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:27:17 compute-0 systemd[1]: libpod-conmon-26ccd5860eaefba84be01fc67628bc099294f3c85560f0a2d6bd9db2fdbceb61.scope: Deactivated successfully.
Nov 29 06:27:17 compute-0 podman[123958]: 2025-11-29 06:27:17.858445787 +0000 UTC m=+0.070299432 container create aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:27:17 compute-0 systemd[1]: Started libpod-conmon-aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79.scope.
Nov 29 06:27:17 compute-0 podman[123958]: 2025-11-29 06:27:17.829589638 +0000 UTC m=+0.041443343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:27:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5753ba008559f702702c2056518e3b895ea2b63d7d13e17ad257cb8ae40edd5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5753ba008559f702702c2056518e3b895ea2b63d7d13e17ad257cb8ae40edd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5753ba008559f702702c2056518e3b895ea2b63d7d13e17ad257cb8ae40edd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:27:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5753ba008559f702702c2056518e3b895ea2b63d7d13e17ad257cb8ae40edd5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:27:17 compute-0 podman[123958]: 2025-11-29 06:27:17.952970308 +0000 UTC m=+0.164823953 container init aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 06:27:17 compute-0 podman[123958]: 2025-11-29 06:27:17.965400207 +0000 UTC m=+0.177253852 container start aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:27:17 compute-0 podman[123958]: 2025-11-29 06:27:17.969713318 +0000 UTC m=+0.181566963 container attach aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:27:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:18.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]: {
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:     "1": [
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:         {
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:             "devices": [
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:                 "/dev/loop3"
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:             ],
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:             "lv_name": "ceph_lv0",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:             "lv_size": "7511998464",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:             "name": "ceph_lv0",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:             "tags": {
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:                 "ceph.cluster_name": "ceph",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:                 "ceph.crush_device_class": "",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:                 "ceph.encrypted": "0",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:                 "ceph.osd_id": "1",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:                 "ceph.type": "block",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:                 "ceph.vdo": "0"
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:             },
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:             "type": "block",
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:             "vg_name": "ceph_vg0"
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:         }
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]:     ]
Nov 29 06:27:18 compute-0 stupefied_shockley[123974]: }
Nov 29 06:27:18 compute-0 systemd[1]: libpod-aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79.scope: Deactivated successfully.
Nov 29 06:27:18 compute-0 podman[123958]: 2025-11-29 06:27:18.815131357 +0000 UTC m=+1.026985002 container died aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 06:27:18 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:27:18 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:27:18 compute-0 ceph-mon[74654]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5753ba008559f702702c2056518e3b895ea2b63d7d13e17ad257cb8ae40edd5-merged.mount: Deactivated successfully.
Nov 29 06:27:18 compute-0 podman[123958]: 2025-11-29 06:27:18.932242262 +0000 UTC m=+1.144095887 container remove aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:27:18 compute-0 systemd[1]: libpod-conmon-aa46d186d6553c0ba1bda172501161693be6f7a9f21b2e6e215932811d4e0f79.scope: Deactivated successfully.
Nov 29 06:27:18 compute-0 sudo[123851]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:19 compute-0 sudo[123997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:19 compute-0 sudo[123997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:19 compute-0 sudo[123997]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:19 compute-0 sudo[124022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:27:19 compute-0 sudo[124022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:19 compute-0 sudo[124022]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:19.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:19 compute-0 sudo[124047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:19 compute-0 sudo[124047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:19 compute-0 sudo[124047]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:19 compute-0 sudo[124072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:27:19 compute-0 sudo[124072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:19 compute-0 podman[124137]: 2025-11-29 06:27:19.550314576 +0000 UTC m=+0.037763870 container create 66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 06:27:19 compute-0 systemd[1]: Started libpod-conmon-66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac.scope.
Nov 29 06:27:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:27:19 compute-0 podman[124137]: 2025-11-29 06:27:19.615590677 +0000 UTC m=+0.103040061 container init 66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 06:27:19 compute-0 podman[124137]: 2025-11-29 06:27:19.622052518 +0000 UTC m=+0.109501812 container start 66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 06:27:19 compute-0 sweet_dewdney[124153]: 167 167
Nov 29 06:27:19 compute-0 systemd[1]: libpod-66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac.scope: Deactivated successfully.
Nov 29 06:27:19 compute-0 podman[124137]: 2025-11-29 06:27:19.625294299 +0000 UTC m=+0.112743623 container attach 66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 06:27:19 compute-0 podman[124137]: 2025-11-29 06:27:19.626291977 +0000 UTC m=+0.113741271 container died 66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:27:19 compute-0 podman[124137]: 2025-11-29 06:27:19.533326509 +0000 UTC m=+0.020775823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-61a6595820a20621f1204b3d269d698e651b0429ab87774726b654c319fb5d06-merged.mount: Deactivated successfully.
Nov 29 06:27:19 compute-0 podman[124137]: 2025-11-29 06:27:19.697528485 +0000 UTC m=+0.184977799 container remove 66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dewdney, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 29 06:27:19 compute-0 systemd[1]: libpod-conmon-66beb996dc107cd053d45a49bfa7d1d29b889a295a52b778d092ff10e885eeac.scope: Deactivated successfully.
Nov 29 06:27:19 compute-0 podman[124180]: 2025-11-29 06:27:19.873643724 +0000 UTC m=+0.043046658 container create b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:27:19 compute-0 systemd[1]: Started libpod-conmon-b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21.scope.
Nov 29 06:27:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a963cc094f94be44e913a5076da3c9e00c0af6f36d5d79025e532cc2d7867e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a963cc094f94be44e913a5076da3c9e00c0af6f36d5d79025e532cc2d7867e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a963cc094f94be44e913a5076da3c9e00c0af6f36d5d79025e532cc2d7867e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a963cc094f94be44e913a5076da3c9e00c0af6f36d5d79025e532cc2d7867e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:27:19 compute-0 podman[124180]: 2025-11-29 06:27:19.943214905 +0000 UTC m=+0.112617839 container init b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 06:27:19 compute-0 podman[124180]: 2025-11-29 06:27:19.852037778 +0000 UTC m=+0.021440742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:27:19 compute-0 podman[124180]: 2025-11-29 06:27:19.952602728 +0000 UTC m=+0.122005662 container start b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:27:19 compute-0 podman[124180]: 2025-11-29 06:27:19.958990707 +0000 UTC m=+0.128393651 container attach b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:27:19 compute-0 ceph-mon[74654]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:19 compute-0 ceph-mon[74654]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:20.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:20 compute-0 gallant_hellman[124196]: {
Nov 29 06:27:20 compute-0 gallant_hellman[124196]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:27:20 compute-0 gallant_hellman[124196]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:27:20 compute-0 gallant_hellman[124196]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:27:20 compute-0 gallant_hellman[124196]:         "osd_id": 1,
Nov 29 06:27:20 compute-0 gallant_hellman[124196]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:27:20 compute-0 gallant_hellman[124196]:         "type": "bluestore"
Nov 29 06:27:20 compute-0 gallant_hellman[124196]:     }
Nov 29 06:27:20 compute-0 gallant_hellman[124196]: }
Nov 29 06:27:20 compute-0 systemd[1]: libpod-b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21.scope: Deactivated successfully.
Nov 29 06:27:20 compute-0 podman[124180]: 2025-11-29 06:27:20.802574526 +0000 UTC m=+0.971977650 container died b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-83a963cc094f94be44e913a5076da3c9e00c0af6f36d5d79025e532cc2d7867e-merged.mount: Deactivated successfully.
Nov 29 06:27:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:27:20 compute-0 podman[124180]: 2025-11-29 06:27:20.927766557 +0000 UTC m=+1.097169531 container remove b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 06:27:20 compute-0 systemd[1]: libpod-conmon-b2cfe003d1fb54f536bf69f3b19dd87008e81cd7975591477928554b3fa24e21.scope: Deactivated successfully.
Nov 29 06:27:20 compute-0 sudo[124072]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:20 compute-0 sshd-session[124230]: Accepted publickey for zuul from 192.168.122.30 port 49516 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:27:20 compute-0 systemd-logind[797]: New session 41 of user zuul.
Nov 29 06:27:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:27:21 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 29 06:27:21 compute-0 sshd-session[124230]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:27:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:21.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:27:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:27:21 compute-0 ceph-mon[74654]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:21 compute-0 sudo[124385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbqrbrnxunoydhbygyltvjhnxeufsxwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397641.117973-23-194703721531553/AnsiballZ_tempfile.py'
Nov 29 06:27:21 compute-0 sudo[124385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:21 compute-0 python3.9[124387]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 06:27:21 compute-0 sudo[124385]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:27:21 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 097b7174-1e34-4025-beab-e4816f31426c does not exist
Nov 29 06:27:21 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 3d80ede6-bb50-4904-ace1-ae7f8815591e does not exist
Nov 29 06:27:21 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev ca77568d-54fe-462f-9425-9c5ee7e1767c does not exist
Nov 29 06:27:22 compute-0 sudo[124389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:22 compute-0 sudo[124393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:22 compute-0 sudo[124389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:22 compute-0 sudo[124393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:22 compute-0 sudo[124389]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:22 compute-0 sudo[124393]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:22 compute-0 sudo[124462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:22 compute-0 sudo[124462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:22 compute-0 sudo[124463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:27:22 compute-0 sudo[124462]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:22 compute-0 sudo[124463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:22 compute-0 sudo[124463]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:22.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:27:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:27:22 compute-0 sudo[124637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgyuebykqtsvucaaksnqrmmhdkgqcrtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397642.135344-59-23602540464088/AnsiballZ_stat.py'
Nov 29 06:27:22 compute-0 sudo[124637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:22 compute-0 python3.9[124639]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:27:22 compute-0 sudo[124637]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:23.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:23 compute-0 sudo[124792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qupjrprfqhaxomexhlhndoacpfczqnye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397643.0440223-83-262982161759787/AnsiballZ_slurp.py'
Nov 29 06:27:23 compute-0 sudo[124792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:23 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 06:27:23 compute-0 python3.9[124794]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 29 06:27:23 compute-0 sudo[124792]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:24.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:24 compute-0 sudo[124946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcwfzgxjsdpvcfwvuyypnihennzpmlzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397643.9081905-107-104677036883407/AnsiballZ_stat.py'
Nov 29 06:27:24 compute-0 sudo[124946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:24 compute-0 ceph-mon[74654]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:27:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:27:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:27:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:27:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:27:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:27:24 compute-0 python3.9[124948]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.1zi7txwq follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:27:24 compute-0 sudo[124946]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:24 compute-0 sudo[125072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwjaefjjspnzvauykkshwghimowrvwez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397643.9081905-107-104677036883407/AnsiballZ_copy.py'
Nov 29 06:27:25 compute-0 sudo[125072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:25.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:25 compute-0 python3.9[125074]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.1zi7txwq mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397643.9081905-107-104677036883407/.source.1zi7txwq _original_basename=.f0owt40_ follow=False checksum=b291f010aefff8b88f41011b780271a83fd1182f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:27:25 compute-0 sudo[125072]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:27:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:26.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:26 compute-0 sudo[125224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ierannvgzihsdigrnvuawhllgzmxumhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397645.4311602-152-210003695167042/AnsiballZ_setup.py'
Nov 29 06:27:26 compute-0 sudo[125224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:26 compute-0 python3.9[125226]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:27:26 compute-0 sudo[125224]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:27.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:27 compute-0 sudo[125377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipgpzzuanntlcupifnhsgfqnihdihfrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397646.8629577-177-132872844631118/AnsiballZ_blockinfile.py'
Nov 29 06:27:27 compute-0 sudo[125377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:27 compute-0 python3.9[125379]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2GXKCQiCwQEMihcSwDVeJtG2CpTemmA6MTbtOkxbB3OAV5PK8v8imPvDGMDurfGFQG0RzWyv9szlMJXdgIkwejIfy/AY7p6nemHOpu6DdAx0EA/jg1YcOIeeEhyMw1/oFzjYClGMohaI1oTKHtR29UXWphTAroOkf26Exvco6hh2ApRTXV9ObzSoOyCC7+OZcOWgYzdoCfu/0FDGkH2ksKLQS7d4AAh/XZ/njXhK57U7ptxHCReUPECGRv7KB4f8TelZDAIeUyp7ngd/9ivUDO1zue1Qr9ECzTzAFqippGXFmYl3+oSid03CY7bqnxav4xWt7UukbaO57goyIPfkklPdC1kA7kZqa9bqeDU1WgDkqnLu8hluArB0Y0Jz+hDfx9pTbAL6MklraoLaGrnrgcibAollAN+7WGqdWxUotENYaljO7P1Z18MlNllWFzk4Le5jMLNL8qArSlzM+ufOThnLdGEuYZhH1x969AisGQ4MQWn0P0lZFu6fE5VSNA/k=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDdPWx5WoFJTxz6PiFZL5f3XrtE682RjGFiIpoe0LXZO
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFQlZMweHfLYiJFtm1r2tQze/oNx6KzgaXkK+Kof7POk0cFMLbTsXU8qgbQMh4o5LVO0Hbas4mAqxRkGcFCg2Po=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCX0dhB1m0xL0qEi5jnTQLLB4bvueVV5foNrqU/OkfV/4gRyp7uP2q21lWq5Dtl2GLk51pS6oD41RI41Y5g7OSRs8b1Z66d6X1QgX0Qns6pv7FwmNSQ25+2VGV6lppnaN5e+JHiwTmzpf82hl/MiiJrHo7B63mllKyl9SZJxUhP9RR4czS3QNYQsZyP7sZeCWothTZ2Q/GK4BWBEtj2+ifeOpa342IivopCH05YVQOx9bpsdFHMYaalMDCwvr2lfVns8aTcpJ3z9uE8wLdKWTyiinT7nuLX6RuPwhXB2proBRH1wrGSIUgcVcizkWn8QizD8LlsGFcHIQJkmq+sJz6r7cCZLIfS6hdAzI+hYbJie6n/agwfxe4r+mbXsmmC6ALKKk7CEnaiNnDg0fgTaUfBPwSfu+JmVrjdSO+S8f/CMbtYeO6QknOxhLV9oK6knszv7nLlSYXTzXanHkN4Y0fW3dsSvoE+qDR0YijbbT8slqMd6z95wWVDFUmTcN8Nzk8=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILci1PI4hoB56+xxS5gSMKceuJ/dv6t7etpmtENwoSFr
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJIaOLr2ntjSUcigXC7a0sFoonsuh0ChCx2a1R6G8EDmJ8/ZB8NEiJE6KAQJDNU5XsXjuaC44eJhOUMRK9r98xA=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUVpPatup3d17omeiTdJaYR8jCcDbraJSPBxWy49Wxst4G+6/lD41HVIKmjgCgIbbmYSFBPQmoXt4gFXP4FRKna6AbQWi0kwF3/T2biQ2qCid0HVDSS8YRVlyrpdVc1/bIg6YNLkGnhzOMp0S1443+cg5PqutAbrAT1LOg6lSBu+K9gIqJ4un3l2guSweoyba5UhMyjrq4Pffx1QCuBggtYSjmA9Q1r5VVNc2J7AbP0QuzOe6J6DhpdGJsfmHDVXZb/4b/aPUdCTKkLseyUtcqElWVhhnGnpYSJdN81ejalSktGHE4JRHih19wwTokiKvoczUgijBzOfl+kt2ELcpDgzpzY0M9yd0Zz7wrK4rLM6hi8x3LYZXZv8N7KnawUcJ2jfzilx1BVLdNzgwDNB7ZlP4O9Vs3fKnBufCUFPNcRyWl6ooczepbgxqgSbr/Ham2O4/qzvJmzLtu0KxBkaFALRWnyM39nYVE/jrMKJ5ihtVDxIY9FGma/Jifg15gqI0=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN19pK3a7AH/OiwlqJTVWP/qzU/QzkC16s4D1xY1Vn6J
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLsXsjJNPVMX1YVTe2oBmcZpUSiv3HOeuICgZtQun4hTopMXH9dE1jQeUruGwqZ+NsKW6X2bLZZJ0/tcn2owL8Q=
                                              create=True mode=0644 path=/tmp/ansible.1zi7txwq state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:27:27 compute-0 sudo[125377]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:28.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:28 compute-0 sudo[125529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqwnlcawueaklsdsjtbsgpnlctinjqpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397647.732723-201-42150303552059/AnsiballZ_command.py'
Nov 29 06:27:28 compute-0 sudo[125529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:28 compute-0 python3.9[125531]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.1zi7txwq' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:27:28 compute-0 sudo[125529]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:28 compute-0 ceph-mon[74654]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:29 compute-0 sudo[125684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkjxfytcrwfqicnaudqkdkrcfuopqitq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397648.636074-225-232097895108984/AnsiballZ_file.py'
Nov 29 06:27:29 compute-0 sudo[125684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:29.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:29 compute-0 python3.9[125686]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.1zi7txwq state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:27:29 compute-0 sudo[125684]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:27:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:27:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:27:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:27:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:27:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:27:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:27:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:27:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:27:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:27:30 compute-0 sshd-session[124235]: Connection closed by 192.168.122.30 port 49516
Nov 29 06:27:30 compute-0 sshd-session[124230]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:27:30 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 29 06:27:30 compute-0 systemd[1]: session-41.scope: Consumed 5.231s CPU time.
Nov 29 06:27:30 compute-0 systemd-logind[797]: Session 41 logged out. Waiting for processes to exit.
Nov 29 06:27:30 compute-0 systemd-logind[797]: Removed session 41.
Nov 29 06:27:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:30.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:30 compute-0 ceph-mon[74654]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:30 compute-0 ceph-mon[74654]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:30 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:27:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:31.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:31 compute-0 ceph-mon[74654]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:32.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:33.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:33 compute-0 ceph-mon[74654]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:33 compute-0 sshd-session[125713]: Invalid user deployer from 138.124.186.225 port 48174
Nov 29 06:27:33 compute-0 sshd-session[125713]: Received disconnect from 138.124.186.225 port 48174:11: Bye Bye [preauth]
Nov 29 06:27:33 compute-0 sshd-session[125713]: Disconnected from invalid user deployer 138.124.186.225 port 48174 [preauth]
Nov 29 06:27:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:34.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:34 compute-0 sshd-session[125715]: Received disconnect from 193.46.255.217 port 46458:11:  [preauth]
Nov 29 06:27:34 compute-0 sshd-session[125715]: Disconnected from authenticating user root 193.46.255.217 port 46458 [preauth]
Nov 29 06:27:34 compute-0 sshd-session[125718]: Accepted publickey for zuul from 192.168.122.30 port 42158 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:27:34 compute-0 systemd-logind[797]: New session 42 of user zuul.
Nov 29 06:27:34 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 29 06:27:34 compute-0 sshd-session[125718]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:27:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:35.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:35 compute-0 ceph-mon[74654]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:27:36 compute-0 python3.9[125871]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:27:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:36.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:37.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:37 compute-0 sudo[126028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emzabdiwjgsomcmsebkjsvwznucrgxnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397656.4903538-61-29700750093532/AnsiballZ_systemd.py'
Nov 29 06:27:37 compute-0 sudo[126028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:37 compute-0 sshd-session[125876]: Invalid user ansadmin from 49.247.35.31 port 21620
Nov 29 06:27:37 compute-0 sshd-session[125876]: Received disconnect from 49.247.35.31 port 21620:11: Bye Bye [preauth]
Nov 29 06:27:37 compute-0 sshd-session[125876]: Disconnected from invalid user ansadmin 49.247.35.31 port 21620 [preauth]
Nov 29 06:27:37 compute-0 python3.9[126030]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 06:27:37 compute-0 sudo[126028]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:38 compute-0 sudo[126182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uijqpneaafjfiqgucfzfrsvnpipmwhlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397657.7841344-85-107255599668871/AnsiballZ_systemd.py'
Nov 29 06:27:38 compute-0 sudo[126182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:38 compute-0 ceph-mon[74654]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:38.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:38 compute-0 python3.9[126184]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:27:38 compute-0 sudo[126182]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:39.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:39 compute-0 sudo[126336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxbxwhrfalzaljmzhjynqtdqfolczgec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397658.7367787-112-82411951789959/AnsiballZ_command.py'
Nov 29 06:27:39 compute-0 sudo[126336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:39 compute-0 python3.9[126338]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:27:39 compute-0 sudo[126336]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:40.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:27:41 compute-0 ceph-mon[74654]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:41.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:41 compute-0 sudo[126492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqxzzjdfebjgegwejjewqedyrzcuaicp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397660.7203822-136-278349749066304/AnsiballZ_stat.py'
Nov 29 06:27:41 compute-0 sudo[126492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:41 compute-0 python3.9[126494]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:27:41 compute-0 sudo[126492]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:42 compute-0 sudo[126594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:42 compute-0 sudo[126594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:42 compute-0 sudo[126594]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:42.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:42 compute-0 sudo[126692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvcqpbqbzsgppoptlhumswwcpovhtkeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397661.74299-163-36806653314581/AnsiballZ_file.py'
Nov 29 06:27:42 compute-0 sudo[126647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:27:42 compute-0 sudo[126692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:42 compute-0 sudo[126647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:27:42 compute-0 sudo[126647]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:42 compute-0 sshd-session[126423]: Received disconnect from 104.208.108.166 port 51042:11: Bye Bye [preauth]
Nov 29 06:27:42 compute-0 sshd-session[126423]: Disconnected from authenticating user root 104.208.108.166 port 51042 [preauth]
Nov 29 06:27:42 compute-0 python3.9[126695]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:27:42 compute-0 sudo[126692]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:42 compute-0 sshd-session[125721]: Connection closed by 192.168.122.30 port 42158
Nov 29 06:27:42 compute-0 sshd-session[125718]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:27:42 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Nov 29 06:27:42 compute-0 systemd[1]: session-42.scope: Consumed 4.337s CPU time.
Nov 29 06:27:42 compute-0 systemd-logind[797]: Session 42 logged out. Waiting for processes to exit.
Nov 29 06:27:42 compute-0 systemd-logind[797]: Removed session 42.
Nov 29 06:27:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:43.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:43 compute-0 ceph-mon[74654]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:44.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:45.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:45 compute-0 sshd-session[126723]: Invalid user usuario1 from 115.190.37.201 port 45262
Nov 29 06:27:45 compute-0 sshd-session[126723]: Received disconnect from 115.190.37.201 port 45262:11: Bye Bye [preauth]
Nov 29 06:27:45 compute-0 sshd-session[126723]: Disconnected from invalid user usuario1 115.190.37.201 port 45262 [preauth]
Nov 29 06:27:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:27:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:46.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:47.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:47 compute-0 ceph-mon[74654]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:48 compute-0 sshd-session[126729]: Accepted publickey for zuul from 192.168.122.30 port 39130 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:27:48 compute-0 systemd-logind[797]: New session 43 of user zuul.
Nov 29 06:27:48 compute-0 systemd[1]: Started Session 43 of User zuul.
Nov 29 06:27:48 compute-0 sshd-session[126729]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:27:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:48.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:49.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:49 compute-0 python3.9[126882]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:27:50 compute-0 sudo[127038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfayzdejaxwdvosgeibzsugmbndrebgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397669.7025518-67-37926951901643/AnsiballZ_setup.py'
Nov 29 06:27:50 compute-0 sudo[127038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:50.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:50 compute-0 python3.9[127040]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:27:50 compute-0 sudo[127038]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:27:51 compute-0 sudo[127123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nknqdgrghdnpbyyzvumtktanxececssh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397669.7025518-67-37926951901643/AnsiballZ_dnf.py'
Nov 29 06:27:51 compute-0 sudo[127123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:27:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:51.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:51 compute-0 python3.9[127125]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 06:27:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:52.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:52 compute-0 sudo[127123]: pam_unix(sudo:session): session closed for user root
Nov 29 06:27:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:53.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:53 compute-0 sshd-session[127127]: Invalid user localhost from 103.147.159.91 port 53210
Nov 29 06:27:53 compute-0 sshd-session[127127]: Received disconnect from 103.147.159.91 port 53210:11: Bye Bye [preauth]
Nov 29 06:27:53 compute-0 sshd-session[127127]: Disconnected from invalid user localhost 103.147.159.91 port 53210 [preauth]
Nov 29 06:27:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:27:54
Nov 29 06:27:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:27:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:27:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.control', 'vms', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Nov 29 06:27:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:27:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:54.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:27:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:27:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:27:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:27:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:27:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:27:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:55.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:55 compute-0 sshd-session[127154]: Received disconnect from 79.116.35.29 port 45500:11: Bye Bye [preauth]
Nov 29 06:27:55 compute-0 sshd-session[127154]: Disconnected from authenticating user root 79.116.35.29 port 45500 [preauth]
Nov 29 06:27:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:27:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:56.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:57.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:27:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:27:58.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:27:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:27:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:27:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:27:59.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:27:59 compute-0 ceph-mon[74654]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:27:59 compute-0 ceph-mon[74654]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:00.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:28:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:01.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:02.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:02 compute-0 python3.9[127285]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:28:02 compute-0 sudo[127286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:02 compute-0 sudo[127286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:02 compute-0 sudo[127286]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:02 compute-0 sudo[127312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:02 compute-0 sudo[127312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:02 compute-0 sudo[127312]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:03.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:03 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 06:28:03 compute-0 ceph-mon[74654]: paxos.0).electionLogic(19) init, last seen epoch 19, mid-election, bumping
Nov 29 06:28:03 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:28:03 compute-0 python3.9[127487]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 06:28:03 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 06:28:04 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 06:28:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:28:04 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 06:28:04 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 29 06:28:04 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.vxabpq(active, since 11m), standbys: compute-2.ngsyhe, compute-1.gaxpay
Nov 29 06:28:04 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 06:28:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:04.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:04 compute-0 python3.9[127637]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:28:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:05.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:05 compute-0 python3.9[127788]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:28:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:28:06 compute-0 sshd-session[126732]: Connection closed by 192.168.122.30 port 39130
Nov 29 06:28:06 compute-0 sshd-session[126729]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:28:06 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Nov 29 06:28:06 compute-0 systemd[1]: session-43.scope: Consumed 6.084s CPU time.
Nov 29 06:28:06 compute-0 systemd-logind[797]: Session 43 logged out. Waiting for processes to exit.
Nov 29 06:28:06 compute-0 systemd-logind[797]: Removed session 43.
Nov 29 06:28:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:06.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:07.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:08 compute-0 sshd-session[127814]: Invalid user ec2-user from 176.109.67.96 port 60420
Nov 29 06:28:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:08.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:08 compute-0 sshd-session[127814]: Received disconnect from 176.109.67.96 port 60420:11: Bye Bye [preauth]
Nov 29 06:28:08 compute-0 sshd-session[127814]: Disconnected from invalid user ec2-user 176.109.67.96 port 60420 [preauth]
Nov 29 06:28:08 compute-0 ceph-mon[74654]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:08 compute-0 ceph-mon[74654]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:08 compute-0 ceph-mon[74654]: mon.compute-2 calling monitor election
Nov 29 06:28:08 compute-0 ceph-mon[74654]: mon.compute-0 calling monitor election
Nov 29 06:28:08 compute-0 ceph-mon[74654]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 06:28:08 compute-0 ceph-mon[74654]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 06:28:08 compute-0 ceph-mon[74654]: fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 06:28:08 compute-0 ceph-mon[74654]: osdmap e139: 3 total, 3 up, 3 in
Nov 29 06:28:08 compute-0 ceph-mon[74654]: mgrmap e10: compute-0.vxabpq(active, since 11m), standbys: compute-2.ngsyhe, compute-1.gaxpay
Nov 29 06:28:08 compute-0 ceph-mon[74654]: overall HEALTH_OK
Nov 29 06:28:08 compute-0 ceph-mon[74654]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:09.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:09 compute-0 ceph-mon[74654]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:09 compute-0 ceph-mon[74654]: pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:10.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:28:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:11.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:11 compute-0 sshd-session[127818]: Accepted publickey for zuul from 192.168.122.30 port 55988 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:28:11 compute-0 systemd-logind[797]: New session 44 of user zuul.
Nov 29 06:28:11 compute-0 systemd[1]: Started Session 44 of User zuul.
Nov 29 06:28:11 compute-0 sshd-session[127818]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:12.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:12 compute-0 python3.9[127971]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:28:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:28:12 compute-0 ceph-mon[74654]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:28:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:13.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:28:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:14.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:14 compute-0 sudo[128128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-circqhfxgygrmnjdbtdgltqvhocqompg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397693.8843126-115-278065654525386/AnsiballZ_file.py'
Nov 29 06:28:14 compute-0 sudo[128128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:14 compute-0 sshd-session[128053]: Received disconnect from 162.214.92.14 port 34948:11: Bye Bye [preauth]
Nov 29 06:28:14 compute-0 sshd-session[128053]: Disconnected from authenticating user root 162.214.92.14 port 34948 [preauth]
Nov 29 06:28:14 compute-0 python3.9[128130]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:28:14 compute-0 sudo[128128]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:14 compute-0 ceph-mon[74654]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:15 compute-0 sudo[128281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyfwhtsfbhqzwhfezmvxkgxcsdrycjoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397694.7143068-115-132539265205154/AnsiballZ_file.py'
Nov 29 06:28:15 compute-0 sudo[128281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:15.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:15 compute-0 python3.9[128283]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:28:15 compute-0 sudo[128281]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:15 compute-0 sudo[128433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwydyzhcfxodwfjuksunbivwjxtlnmsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397695.4621618-161-248581819783985/AnsiballZ_stat.py'
Nov 29 06:28:15 compute-0 sudo[128433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:28:16 compute-0 python3.9[128435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:16 compute-0 sudo[128433]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:16.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:16 compute-0 ceph-mon[74654]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:16 compute-0 sudo[128556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxtvubdurumtffrelbepvephteicbttd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397695.4621618-161-248581819783985/AnsiballZ_copy.py'
Nov 29 06:28:16 compute-0 sudo[128556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:17 compute-0 python3.9[128558]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397695.4621618-161-248581819783985/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=8468ae915c8d555809e81a9f592f94c05f7bce7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:17 compute-0 sudo[128556]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:17.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:17 compute-0 sudo[128709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxmncugtewoptezkuqscktndxjjcqwbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397697.2204843-161-232337174261202/AnsiballZ_stat.py'
Nov 29 06:28:17 compute-0 sudo[128709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:17 compute-0 python3.9[128711]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:17 compute-0 sudo[128709]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:18.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:18 compute-0 sudo[128832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mymfphziqilcfmhsvzloxjpgloculiir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397697.2204843-161-232337174261202/AnsiballZ_copy.py'
Nov 29 06:28:18 compute-0 sudo[128832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:18 compute-0 python3.9[128834]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397697.2204843-161-232337174261202/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=03c2952c2692ca442730881904078ac3e566f340 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:18 compute-0 sudo[128832]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:19 compute-0 sudo[128985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjjpafxxzaopfxxyhogspzduuphablsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397698.755487-161-277762538179837/AnsiballZ_stat.py'
Nov 29 06:28:19 compute-0 sudo[128985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:28:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:19.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:28:19 compute-0 python3.9[128987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:19 compute-0 sudo[128985]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:19 compute-0 sudo[129108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiributzkcneqmpeefmidaysypybqxxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397698.755487-161-277762538179837/AnsiballZ_copy.py'
Nov 29 06:28:19 compute-0 sudo[129108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:20 compute-0 python3.9[129110]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397698.755487-161-277762538179837/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a644651d7a189f3c2f7043d8997cdf89e60c7bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:20 compute-0 sudo[129108]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:20.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:20 compute-0 sudo[129260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngnepiklvdsukxtpxmzbgvwofiyvbzib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397700.2979712-313-53913557958002/AnsiballZ_file.py'
Nov 29 06:28:20 compute-0 sudo[129260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:20 compute-0 python3.9[129262]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:28:20 compute-0 sudo[129260]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:28:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:21.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:21 compute-0 sudo[129413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjjogbgqrwcedjifvpygrfnveiligwpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397701.1265812-313-63996077836365/AnsiballZ_file.py'
Nov 29 06:28:21 compute-0 sudo[129413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:21 compute-0 python3.9[129415]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:28:21 compute-0 sudo[129413]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:22.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:22 compute-0 sudo[129565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxcbmirnpuykxjjsdaebkqlmiafqmcbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397701.9347355-350-153542495019964/AnsiballZ_stat.py'
Nov 29 06:28:22 compute-0 sudo[129565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:22 compute-0 sudo[129568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:22 compute-0 sudo[129568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:22 compute-0 sudo[129568]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:22 compute-0 sudo[129585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:22 compute-0 sudo[129585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:22 compute-0 python3.9[129567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:22 compute-0 sudo[129585]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:22 compute-0 sudo[129565]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:22 compute-0 sudo[129614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:28:22 compute-0 sudo[129614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:22 compute-0 sudo[129614]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:22 compute-0 sudo[129638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:22 compute-0 sudo[129638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:22 compute-0 sudo[129638]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:22 compute-0 sudo[129670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:22 compute-0 sudo[129670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:22 compute-0 sudo[129670]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:22 compute-0 sudo[129721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 06:28:22 compute-0 sudo[129721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:22 compute-0 sudo[129853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjajlkxznajcqoimfbqwcqemwwpgeios ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397701.9347355-350-153542495019964/AnsiballZ_copy.py'
Nov 29 06:28:22 compute-0 sudo[129853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:28:22 compute-0 sudo[129721]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:28:23 compute-0 python3.9[129857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397701.9347355-350-153542495019964/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=aefce5813a5a721e088ba4838a64c39201165a8e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:23 compute-0 sudo[129853]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:23.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:23 compute-0 ceph-mon[74654]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:23 compute-0 sudo[130011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwmabvukiggcqepvuxcfqeylklmcypik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397703.308839-350-70183169409384/AnsiballZ_stat.py'
Nov 29 06:28:23 compute-0 sudo[130011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:23 compute-0 python3.9[130013]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:23 compute-0 sudo[130011]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:28:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:28:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:28:24 compute-0 sudo[130134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpibbjwbdacowdsnoglwznsudcglwhsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397703.308839-350-70183169409384/AnsiballZ_copy.py'
Nov 29 06:28:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:24.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:28:24 compute-0 sudo[130134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:28:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:28:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:28:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:28:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:28:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:28:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:24 compute-0 python3.9[130136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397703.308839-350-70183169409384/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=446989bd92736b57ebc923ce429d8effafd00e68 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:24 compute-0 sudo[130134]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:28:25 compute-0 ceph-mon[74654]: pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:25 compute-0 ceph-mon[74654]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:25 compute-0 ceph-mon[74654]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:25 compute-0 sudo[130291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnlqbsinvulzaturevftzuoqetsferla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397704.6557233-350-240121221266407/AnsiballZ_stat.py'
Nov 29 06:28:25 compute-0 sudo[130291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:28:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:25.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:25 compute-0 sudo[130287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:25 compute-0 sudo[130287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:25 compute-0 sudo[130287]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:25 compute-0 sudo[130315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:28:25 compute-0 sudo[130315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:25 compute-0 sudo[130315]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:25 compute-0 sudo[130340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:25 compute-0 sudo[130340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:25 compute-0 sudo[130340]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:25 compute-0 python3.9[130306]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:25 compute-0 sudo[130291]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:25 compute-0 sudo[130365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:28:25 compute-0 sudo[130365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:25 compute-0 sudo[130525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-regepgtmvnhsnwnxtkggmprwzfapkskx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397704.6557233-350-240121221266407/AnsiballZ_copy.py'
Nov 29 06:28:25 compute-0 sudo[130525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:25 compute-0 sudo[130365]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:25 compute-0 python3.9[130529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397704.6557233-350-240121221266407/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ae6872864caab8d678a666cf230eafbe2b2e1e47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:25 compute-0 sudo[130525]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:28:25 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:28:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:28:25 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:28:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:28:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:28:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:26 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 92a96287-3c5e-43fd-ab4b-b2d05a07e5e8 does not exist
Nov 29 06:28:26 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 7143ed74-3866-491b-90e0-d5351a358d03 does not exist
Nov 29 06:28:26 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev f202a588-3d41-4acf-b71d-dbe2d35f56ba does not exist
Nov 29 06:28:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:28:26 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:28:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:28:26 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:28:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:28:26 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:28:26 compute-0 sudo[130568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:26 compute-0 sudo[130568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:26 compute-0 sudo[130568]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:26 compute-0 ceph-mon[74654]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:28:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:28:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:28:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:28:26 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:28:26 compute-0 sudo[130616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:28:26 compute-0 sudo[130616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:26 compute-0 sudo[130616]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:26 compute-0 sudo[130670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:26 compute-0 sudo[130670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:26 compute-0 sudo[130670]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:26.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:26 compute-0 sudo[130718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:28:26 compute-0 sudo[130718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:26 compute-0 sudo[130793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibjzzxstpdimvndnqxtixogpbokxxocc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397706.1440458-476-239451494155378/AnsiballZ_file.py'
Nov 29 06:28:26 compute-0 sudo[130793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:26 compute-0 python3.9[130795]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:28:26 compute-0 sudo[130793]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:26 compute-0 podman[130837]: 2025-11-29 06:28:26.62195352 +0000 UTC m=+0.043503285 container create 5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:28:26 compute-0 systemd[1]: Started libpod-conmon-5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a.scope.
Nov 29 06:28:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:28:26 compute-0 podman[130837]: 2025-11-29 06:28:26.60689763 +0000 UTC m=+0.028447415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:28:26 compute-0 podman[130837]: 2025-11-29 06:28:26.867780192 +0000 UTC m=+0.289330047 container init 5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:28:26 compute-0 podman[130837]: 2025-11-29 06:28:26.881751532 +0000 UTC m=+0.303301337 container start 5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:28:26 compute-0 frosty_driscoll[130877]: 167 167
Nov 29 06:28:26 compute-0 systemd[1]: libpod-5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a.scope: Deactivated successfully.
Nov 29 06:28:26 compute-0 conmon[130877]: conmon 5f5a3165ec0b04c12945 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a.scope/container/memory.events
Nov 29 06:28:27 compute-0 sudo[131019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmdxhwrqngsguwyptueudmpntpirzpaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397706.7249148-476-31348704304265/AnsiballZ_file.py'
Nov 29 06:28:27 compute-0 sudo[131019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:27.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:27 compute-0 python3.9[131021]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:28:27 compute-0 podman[130837]: 2025-11-29 06:28:27.305249798 +0000 UTC m=+0.726799583 container attach 5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 06:28:27 compute-0 podman[130837]: 2025-11-29 06:28:27.306140203 +0000 UTC m=+0.727689978 container died 5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 06:28:27 compute-0 sudo[131019]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:27 compute-0 sudo[131172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlaoqbyrahnjrnspbjabtcuqzsjptyvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397707.5365577-526-259588650306240/AnsiballZ_stat.py'
Nov 29 06:28:27 compute-0 sudo[131172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:28 compute-0 python3.9[131174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:28 compute-0 sudo[131172]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:28 compute-0 sshd-session[130960]: Invalid user cloudera from 118.193.39.127 port 38426
Nov 29 06:28:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c0b60d040d5400b0e561120842b21ff4d52d947effc3c5b7cd27fe126208ad0-merged.mount: Deactivated successfully.
Nov 29 06:28:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:28:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:28.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:28:28 compute-0 sshd-session[130960]: Received disconnect from 118.193.39.127 port 38426:11: Bye Bye [preauth]
Nov 29 06:28:28 compute-0 sshd-session[130960]: Disconnected from invalid user cloudera 118.193.39.127 port 38426 [preauth]
Nov 29 06:28:28 compute-0 sudo[131295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqlbeyiqnxgzauzmttarjxagmjbppsoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397707.5365577-526-259588650306240/AnsiballZ_copy.py'
Nov 29 06:28:28 compute-0 sudo[131295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:28 compute-0 python3.9[131297]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397707.5365577-526-259588650306240/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=740ccfe5daa9c5421ca02e98e83fd489994437b6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:28 compute-0 sudo[131295]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:28 compute-0 podman[130837]: 2025-11-29 06:28:28.955618813 +0000 UTC m=+2.377168618 container remove 5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_driscoll, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 06:28:28 compute-0 systemd[1]: libpod-conmon-5f5a3165ec0b04c129450e78ccb71e28a1b860a214d5e3c8f7f601cba4b7be0a.scope: Deactivated successfully.
Nov 29 06:28:29 compute-0 ceph-mon[74654]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:29.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:29 compute-0 podman[131351]: 2025-11-29 06:28:29.129430246 +0000 UTC m=+0.025391217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:28:29 compute-0 podman[131351]: 2025-11-29 06:28:29.333222676 +0000 UTC m=+0.229183637 container create 17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:28:29 compute-0 systemd[1]: Started libpod-conmon-17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a.scope.
Nov 29 06:28:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d3f777d4fb70987ced7cef1b546a828f4d93cdb8dfe32798c8dd0e4173e272/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d3f777d4fb70987ced7cef1b546a828f4d93cdb8dfe32798c8dd0e4173e272/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d3f777d4fb70987ced7cef1b546a828f4d93cdb8dfe32798c8dd0e4173e272/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d3f777d4fb70987ced7cef1b546a828f4d93cdb8dfe32798c8dd0e4173e272/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2d3f777d4fb70987ced7cef1b546a828f4d93cdb8dfe32798c8dd0e4173e272/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:28:29 compute-0 sudo[131474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uinzlqjpfzclhtoxnikhjsrkmzxrrand ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397709.1010518-526-226613274559076/AnsiballZ_stat.py'
Nov 29 06:28:29 compute-0 podman[131351]: 2025-11-29 06:28:29.478262156 +0000 UTC m=+0.374223097 container init 17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:28:29 compute-0 sudo[131474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:29 compute-0 podman[131351]: 2025-11-29 06:28:29.489742354 +0000 UTC m=+0.385703275 container start 17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 06:28:29 compute-0 podman[131351]: 2025-11-29 06:28:29.493611295 +0000 UTC m=+0.389572236 container attach 17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 06:28:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:28:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:28:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:28:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:28:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:28:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:28:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:28:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:28:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:28:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:28:29 compute-0 python3.9[131477]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:29 compute-0 sudo[131474]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:30 compute-0 sudo[131603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vowvpjseqpgikdinfxbafbxxgsqwjbvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397709.1010518-526-226613274559076/AnsiballZ_copy.py'
Nov 29 06:28:30 compute-0 sudo[131603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:30.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:30 compute-0 gifted_proskuriakova[131445]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:28:30 compute-0 gifted_proskuriakova[131445]: --> relative data size: 1.0
Nov 29 06:28:30 compute-0 gifted_proskuriakova[131445]: --> All data devices are unavailable
Nov 29 06:28:30 compute-0 systemd[1]: libpod-17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a.scope: Deactivated successfully.
Nov 29 06:28:30 compute-0 podman[131351]: 2025-11-29 06:28:30.340508883 +0000 UTC m=+1.236469844 container died 17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 06:28:30 compute-0 python3.9[131607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397709.1010518-526-226613274559076/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=446989bd92736b57ebc923ce429d8effafd00e68 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:30 compute-0 sudo[131603]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:30 compute-0 sudo[131772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upzcukcubzfhozjtdvyqeudtbxvmfuug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397710.633576-526-264266063279056/AnsiballZ_stat.py'
Nov 29 06:28:30 compute-0 sudo[131772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:28:31 compute-0 python3.9[131774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:31 compute-0 sudo[131772]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:31.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:31 compute-0 ceph-mon[74654]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:31 compute-0 sudo[131896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqynihyykoruwqwsbjbmrmmdhhpilcij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397710.633576-526-264266063279056/AnsiballZ_copy.py'
Nov 29 06:28:31 compute-0 sudo[131896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2d3f777d4fb70987ced7cef1b546a828f4d93cdb8dfe32798c8dd0e4173e272-merged.mount: Deactivated successfully.
Nov 29 06:28:31 compute-0 python3.9[131898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397710.633576-526-264266063279056/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=75b192adbbcf3b531af652912e1c620c8b2fc70c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:31 compute-0 sudo[131896]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:32.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:32 compute-0 podman[131351]: 2025-11-29 06:28:32.492220871 +0000 UTC m=+3.388181832 container remove 17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 29 06:28:32 compute-0 systemd[1]: libpod-conmon-17685d9a241417f0456581e3932acd41676797ccc160946af4742ca12b77548a.scope: Deactivated successfully.
Nov 29 06:28:32 compute-0 sudo[130718]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:32 compute-0 sudo[131923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:32 compute-0 sudo[131923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:32 compute-0 sudo[131923]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:32 compute-0 sudo[131948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:28:32 compute-0 sudo[131948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:32 compute-0 sudo[131948]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:32 compute-0 sudo[131996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:32 compute-0 sudo[131996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:32 compute-0 sudo[131996]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:32 compute-0 sudo[132050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:28:32 compute-0 sudo[132050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:32 compute-0 sudo[132162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpwkbailxtlyianrsahlmzgnlbagjkjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397712.6742063-705-117629346352443/AnsiballZ_file.py'
Nov 29 06:28:32 compute-0 sudo[132162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:33 compute-0 python3.9[132170]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:28:33 compute-0 sudo[132162]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:33 compute-0 podman[132192]: 2025-11-29 06:28:33.180583495 +0000 UTC m=+0.105518270 container create 73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wescoff, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:28:33 compute-0 podman[132192]: 2025-11-29 06:28:33.100501764 +0000 UTC m=+0.025436589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:28:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:33.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:33 compute-0 systemd[1]: Started libpod-conmon-73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f.scope.
Nov 29 06:28:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:28:33 compute-0 sudo[132361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmwfriwrfpozziewpjcjyfatmxvwollk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397713.3477743-737-150065775857074/AnsiballZ_stat.py'
Nov 29 06:28:33 compute-0 sudo[132361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:33 compute-0 podman[132192]: 2025-11-29 06:28:33.662597755 +0000 UTC m=+0.587532580 container init 73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wescoff, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:28:33 compute-0 podman[132192]: 2025-11-29 06:28:33.673786045 +0000 UTC m=+0.598720820 container start 73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wescoff, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:28:33 compute-0 youthful_wescoff[132285]: 167 167
Nov 29 06:28:33 compute-0 systemd[1]: libpod-73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f.scope: Deactivated successfully.
Nov 29 06:28:33 compute-0 podman[132192]: 2025-11-29 06:28:33.683453991 +0000 UTC m=+0.608388826 container attach 73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:28:33 compute-0 podman[132192]: 2025-11-29 06:28:33.685054627 +0000 UTC m=+0.609989402 container died 73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 06:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c12106a6a9fed0c3f19631172e254e3440f258f6689b0ca50f1e60144af69c08-merged.mount: Deactivated successfully.
Nov 29 06:28:33 compute-0 podman[132192]: 2025-11-29 06:28:33.785512171 +0000 UTC m=+0.710446946 container remove 73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:28:33 compute-0 python3.9[132363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:33 compute-0 systemd[1]: libpod-conmon-73caf0b8b4784900e9ab29cd3de5b74e630d03b85e2b88c236c9a3b047ab4e3f.scope: Deactivated successfully.
Nov 29 06:28:33 compute-0 sudo[132361]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:34 compute-0 podman[132409]: 2025-11-29 06:28:33.943344226 +0000 UTC m=+0.022833195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:28:34 compute-0 podman[132409]: 2025-11-29 06:28:34.046571479 +0000 UTC m=+0.126060478 container create 5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:28:34 compute-0 ceph-mon[74654]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:34 compute-0 systemd[1]: Started libpod-conmon-5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc.scope.
Nov 29 06:28:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47128314c6321e968747d3a965ec696fecd8f155dcbbb2d58eb89a71158c353/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47128314c6321e968747d3a965ec696fecd8f155dcbbb2d58eb89a71158c353/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47128314c6321e968747d3a965ec696fecd8f155dcbbb2d58eb89a71158c353/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47128314c6321e968747d3a965ec696fecd8f155dcbbb2d58eb89a71158c353/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:28:34 compute-0 sudo[132525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pukjhzudimcohvfarhtlmrrbdgpygigo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397713.3477743-737-150065775857074/AnsiballZ_copy.py'
Nov 29 06:28:34 compute-0 sudo[132525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:34 compute-0 podman[132409]: 2025-11-29 06:28:34.207649557 +0000 UTC m=+0.287138536 container init 5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:28:34 compute-0 podman[132409]: 2025-11-29 06:28:34.215402079 +0000 UTC m=+0.294891088 container start 5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 06:28:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:34 compute-0 podman[132409]: 2025-11-29 06:28:34.263042122 +0000 UTC m=+0.342531131 container attach 5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:28:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:34.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:34 compute-0 python3.9[132527]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397713.3477743-737-150065775857074/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3385b01217fece5877d0a0cc7f45f60761b1d6d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:34 compute-0 sudo[132525]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:34 compute-0 sudo[132682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oejohcoxpiwtxaefibjhyqqmcqopwkfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397714.6324754-782-13259753377514/AnsiballZ_file.py'
Nov 29 06:28:34 compute-0 sudo[132682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]: {
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:     "1": [
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:         {
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:             "devices": [
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:                 "/dev/loop3"
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:             ],
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:             "lv_name": "ceph_lv0",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:             "lv_size": "7511998464",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:             "name": "ceph_lv0",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:             "tags": {
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:                 "ceph.cluster_name": "ceph",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:                 "ceph.crush_device_class": "",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:                 "ceph.encrypted": "0",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:                 "ceph.osd_id": "1",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:                 "ceph.type": "block",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:                 "ceph.vdo": "0"
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:             },
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:             "type": "block",
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:             "vg_name": "ceph_vg0"
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:         }
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]:     ]
Nov 29 06:28:34 compute-0 quizzical_shamir[132496]: }
Nov 29 06:28:35 compute-0 systemd[1]: libpod-5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc.scope: Deactivated successfully.
Nov 29 06:28:35 compute-0 podman[132687]: 2025-11-29 06:28:35.075100044 +0000 UTC m=+0.028316601 container died 5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:28:35 compute-0 python3.9[132686]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:28:35 compute-0 sudo[132682]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:28:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:35.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:28:35 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:28:35 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Cumulative writes: 7884 writes, 33K keys, 7884 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 7884 writes, 1451 syncs, 5.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7884 writes, 33K keys, 7884 commit groups, 1.0 writes per commit group, ingest: 20.94 MB, 0.03 MB/s
                                           Interval WAL: 7884 writes, 1451 syncs, 5.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 06:28:35 compute-0 sudo[132852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjelghixptsyzrlzfgwlpogqksgfjtvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397715.3238444-807-21213177959007/AnsiballZ_stat.py'
Nov 29 06:28:35 compute-0 sudo[132852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:35 compute-0 python3.9[132854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:35 compute-0 sudo[132852]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:28:36 compute-0 ceph-mon[74654]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:36 compute-0 ceph-mon[74654]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a47128314c6321e968747d3a965ec696fecd8f155dcbbb2d58eb89a71158c353-merged.mount: Deactivated successfully.
Nov 29 06:28:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:36 compute-0 sudo[132975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmcumxxhssrzizhlerxybaowfjzeeqyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397715.3238444-807-21213177959007/AnsiballZ_copy.py'
Nov 29 06:28:36 compute-0 sudo[132975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:36.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:36 compute-0 podman[132687]: 2025-11-29 06:28:36.292524653 +0000 UTC m=+1.245741210 container remove 5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shamir, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:28:36 compute-0 systemd[1]: libpod-conmon-5a7b220b6cd17f083d86d15477a98c9ffce91c7fce8306af47e4b462acab2ffc.scope: Deactivated successfully.
Nov 29 06:28:36 compute-0 sudo[132050]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:36 compute-0 sudo[132978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:36 compute-0 sudo[132978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:36 compute-0 sudo[132978]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:36 compute-0 sudo[133003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:28:36 compute-0 sudo[133003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:36 compute-0 sudo[133003]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:36 compute-0 python3.9[132977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397715.3238444-807-21213177959007/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3385b01217fece5877d0a0cc7f45f60761b1d6d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:36 compute-0 sudo[132975]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:36 compute-0 sudo[133028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:36 compute-0 sudo[133028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:36 compute-0 sudo[133028]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:36 compute-0 sudo[133053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:28:36 compute-0 sudo[133053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:36 compute-0 podman[133199]: 2025-11-29 06:28:36.839215633 +0000 UTC m=+0.023885224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:28:36 compute-0 sudo[133280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrpbbhqxxuomklwqpnbtepurzeszlwcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397716.6893785-854-229424999836343/AnsiballZ_file.py'
Nov 29 06:28:36 compute-0 podman[133199]: 2025-11-29 06:28:36.968540133 +0000 UTC m=+0.153209714 container create abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 06:28:36 compute-0 sudo[133280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:37 compute-0 python3.9[133282]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:28:37 compute-0 sudo[133280]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:37.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:37 compute-0 systemd[1]: Started libpod-conmon-abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1.scope.
Nov 29 06:28:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:28:37 compute-0 podman[133199]: 2025-11-29 06:28:37.565452339 +0000 UTC m=+0.750121960 container init abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 06:28:37 compute-0 podman[133199]: 2025-11-29 06:28:37.575682282 +0000 UTC m=+0.760351863 container start abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:28:37 compute-0 unruffled_diffie[133310]: 167 167
Nov 29 06:28:37 compute-0 systemd[1]: libpod-abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1.scope: Deactivated successfully.
Nov 29 06:28:37 compute-0 sudo[133450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wszgnjuomqfrvwkszcmducmcegjscfpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397717.3795857-878-58498530684549/AnsiballZ_stat.py'
Nov 29 06:28:37 compute-0 sudo[133450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:37 compute-0 podman[133199]: 2025-11-29 06:28:37.696917021 +0000 UTC m=+0.881586682 container attach abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:28:37 compute-0 podman[133199]: 2025-11-29 06:28:37.697549439 +0000 UTC m=+0.882219040 container died abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 06:28:37 compute-0 python3.9[133452]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:37 compute-0 sudo[133450]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:38.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6a5b6479dbeb9d8fb38bd0f62918c245310c120c6c1aa6f0302970d634deb46-merged.mount: Deactivated successfully.
Nov 29 06:28:38 compute-0 sudo[133574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iggievpxaupcikfeltjkljyiwlwizrdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397717.3795857-878-58498530684549/AnsiballZ_copy.py'
Nov 29 06:28:38 compute-0 sudo[133574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:38 compute-0 python3.9[133576]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397717.3795857-878-58498530684549/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3385b01217fece5877d0a0cc7f45f60761b1d6d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:38 compute-0 sudo[133574]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:39 compute-0 podman[133199]: 2025-11-29 06:28:39.049410454 +0000 UTC m=+2.234080035 container remove abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:28:39 compute-0 systemd[1]: libpod-conmon-abc330159b66d46268aa933a5bc88c6121559d7674c989d586fcefe964d9f5f1.scope: Deactivated successfully.
Nov 29 06:28:39 compute-0 sshd-session[133577]: Invalid user guest123 from 138.124.186.225 port 51326
Nov 29 06:28:39 compute-0 sshd-session[133577]: Received disconnect from 138.124.186.225 port 51326:11: Bye Bye [preauth]
Nov 29 06:28:39 compute-0 sshd-session[133577]: Disconnected from invalid user guest123 138.124.186.225 port 51326 [preauth]
Nov 29 06:28:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:39.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:39 compute-0 podman[133635]: 2025-11-29 06:28:39.251784054 +0000 UTC m=+0.059615377 container create eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 29 06:28:39 compute-0 podman[133635]: 2025-11-29 06:28:39.217013969 +0000 UTC m=+0.024845322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:28:39 compute-0 systemd[1]: Started libpod-conmon-eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df.scope.
Nov 29 06:28:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0ced9176a00d0ad1b0192c37ead5b364e03e2e48da400fc0edeaf9a28d273d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0ced9176a00d0ad1b0192c37ead5b364e03e2e48da400fc0edeaf9a28d273d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0ced9176a00d0ad1b0192c37ead5b364e03e2e48da400fc0edeaf9a28d273d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0ced9176a00d0ad1b0192c37ead5b364e03e2e48da400fc0edeaf9a28d273d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:28:39 compute-0 ceph-mon[74654]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:39 compute-0 sudo[133756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihnxmnovwjsowqzuysuqqsntnxvsebwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397719.1553688-931-265020257286724/AnsiballZ_file.py'
Nov 29 06:28:39 compute-0 sudo[133756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:39 compute-0 python3.9[133758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:28:39 compute-0 sudo[133756]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:39 compute-0 podman[133635]: 2025-11-29 06:28:39.896871069 +0000 UTC m=+0.704702402 container init eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 06:28:39 compute-0 podman[133635]: 2025-11-29 06:28:39.909715657 +0000 UTC m=+0.717546990 container start eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:28:39 compute-0 podman[133635]: 2025-11-29 06:28:39.941706362 +0000 UTC m=+0.749537725 container attach eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:28:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:40 compute-0 sudo[133910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ithpdtlzajgcgmaqxfnslnlicvdanjpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397719.9633133-955-33488274034566/AnsiballZ_stat.py'
Nov 29 06:28:40 compute-0 sudo[133910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:40.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:40 compute-0 python3.9[133912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:40 compute-0 sudo[133910]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:40 compute-0 quirky_curie[133727]: {
Nov 29 06:28:40 compute-0 quirky_curie[133727]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:28:40 compute-0 quirky_curie[133727]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:28:40 compute-0 quirky_curie[133727]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:28:40 compute-0 quirky_curie[133727]:         "osd_id": 1,
Nov 29 06:28:40 compute-0 quirky_curie[133727]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:28:40 compute-0 quirky_curie[133727]:         "type": "bluestore"
Nov 29 06:28:40 compute-0 quirky_curie[133727]:     }
Nov 29 06:28:40 compute-0 quirky_curie[133727]: }
Nov 29 06:28:40 compute-0 systemd[1]: libpod-eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df.scope: Deactivated successfully.
Nov 29 06:28:40 compute-0 podman[133635]: 2025-11-29 06:28:40.761721662 +0000 UTC m=+1.569552995 container died eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:28:40 compute-0 ceph-mon[74654]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:40 compute-0 sudo[134060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrbqukebteuyjkacjkhxempsiihvzbkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397719.9633133-955-33488274034566/AnsiballZ_copy.py'
Nov 29 06:28:40 compute-0 sudo[134060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:40 compute-0 python3.9[134062]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397719.9633133-955-33488274034566/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3385b01217fece5877d0a0cc7f45f60761b1d6d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:41 compute-0 sudo[134060]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:28:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d0ced9176a00d0ad1b0192c37ead5b364e03e2e48da400fc0edeaf9a28d273d-merged.mount: Deactivated successfully.
Nov 29 06:28:41 compute-0 podman[133635]: 2025-11-29 06:28:41.150828753 +0000 UTC m=+1.958660086 container remove eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:28:41 compute-0 systemd[1]: libpod-conmon-eb15e35498b9505e0bbb2c0e794ebc19af8bb45dcadc15d26b582559667dd5df.scope: Deactivated successfully.
Nov 29 06:28:41 compute-0 sudo[133053]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:28:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:41.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:41 compute-0 sudo[134216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piesgpfdgmjprsudloacpzppstkypbcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397721.232901-1002-130960759231090/AnsiballZ_file.py'
Nov 29 06:28:41 compute-0 sudo[134216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:41 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:28:41 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:41 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 3b8093ca-856b-4a55-b9dd-4fce9d6c6d95 does not exist
Nov 29 06:28:41 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 1c55fe6d-f5de-42d0-be82-9380ad626aa1 does not exist
Nov 29 06:28:41 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 6049e527-278f-4d9e-9490-37827d4ea568 does not exist
Nov 29 06:28:41 compute-0 python3.9[134218]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:28:41 compute-0 sudo[134219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:41 compute-0 sudo[134219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:41 compute-0 sudo[134219]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:41 compute-0 sudo[134216]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:41 compute-0 sudo[134244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:28:41 compute-0 sudo[134244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:41 compute-0 sudo[134244]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:41 compute-0 ceph-mon[74654]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:28:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:42 compute-0 sudo[134418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izeyvlqhkbwfwokytlvrtzifeuplpcem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397721.9604077-1025-75576068781504/AnsiballZ_stat.py'
Nov 29 06:28:42 compute-0 sudo[134418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:42.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:42 compute-0 python3.9[134420]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:42 compute-0 sudo[134418]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:42 compute-0 sudo[134444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:42 compute-0 sudo[134444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:42 compute-0 sudo[134444]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:42 compute-0 sudo[134489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:28:42 compute-0 sudo[134489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:28:42 compute-0 sudo[134489]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:42 compute-0 sudo[134592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onefhcvzhoflnnghptqvliidgbvemtbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397721.9604077-1025-75576068781504/AnsiballZ_copy.py'
Nov 29 06:28:42 compute-0 sudo[134592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:43 compute-0 python3.9[134594]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397721.9604077-1025-75576068781504/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3385b01217fece5877d0a0cc7f45f60761b1d6d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:43.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:43 compute-0 sudo[134592]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:43 compute-0 ceph-mon[74654]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:43 compute-0 sudo[134744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlfbdtqlgtbssibxdlwygsgrqdsgnagj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397723.432342-1066-45835701008838/AnsiballZ_file.py'
Nov 29 06:28:43 compute-0 sudo[134744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:43 compute-0 python3.9[134746]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:28:43 compute-0 sudo[134744]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:44.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:44 compute-0 sudo[134896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvakghdswrtlzbxtbnthmozsbwlsjrce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397724.1396701-1083-160700555293399/AnsiballZ_stat.py'
Nov 29 06:28:44 compute-0 sudo[134896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:44 compute-0 python3.9[134898]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:44 compute-0 sudo[134896]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:45 compute-0 sudo[135020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifuxlhhcoggiezlawzrxhglisphxouyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397724.1396701-1083-160700555293399/AnsiballZ_copy.py'
Nov 29 06:28:45 compute-0 sudo[135020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:45.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:45 compute-0 python3.9[135022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397724.1396701-1083-160700555293399/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=3385b01217fece5877d0a0cc7f45f60761b1d6d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:45 compute-0 sudo[135020]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:28:46 compute-0 ceph-mon[74654]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:46.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:47 compute-0 sshd-session[127821]: Connection closed by 192.168.122.30 port 55988
Nov 29 06:28:47 compute-0 sshd-session[127818]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:28:47 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Nov 29 06:28:47 compute-0 systemd[1]: session-44.scope: Consumed 25.302s CPU time.
Nov 29 06:28:47 compute-0 systemd-logind[797]: Session 44 logged out. Waiting for processes to exit.
Nov 29 06:28:47 compute-0 systemd-logind[797]: Removed session 44.
Nov 29 06:28:47 compute-0 ceph-mon[74654]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:47.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:48.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:49.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:50.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:50 compute-0 ceph-mon[74654]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:28:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:28:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:51.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:28:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:52.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:52 compute-0 sshd-session[135050]: Accepted publickey for zuul from 192.168.122.30 port 34944 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:28:52 compute-0 systemd-logind[797]: New session 45 of user zuul.
Nov 29 06:28:52 compute-0 systemd[1]: Started Session 45 of User zuul.
Nov 29 06:28:52 compute-0 sshd-session[135050]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:28:53 compute-0 ceph-mon[74654]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:53 compute-0 ceph-mgr[74948]: [devicehealth INFO root] Check health
Nov 29 06:28:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:53.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:53 compute-0 sudo[135204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmbmeclzujtqirkwbsjaywaxxkoxpayk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397732.7784085-31-41192773751900/AnsiballZ_file.py'
Nov 29 06:28:53 compute-0 sudo[135204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:53 compute-0 python3.9[135206]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:53 compute-0 sudo[135204]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:54 compute-0 ceph-mon[74654]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:28:54
Nov 29 06:28:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:28:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:28:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'backups', 'default.rgw.log', '.mgr', 'default.rgw.control']
Nov 29 06:28:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:28:54 compute-0 sudo[135356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgqzzmyylmjofjrrczqwdkkqsifqcwjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397733.7867298-67-138562747292606/AnsiballZ_stat.py'
Nov 29 06:28:54 compute-0 sudo[135356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:28:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:28:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:28:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:28:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:28:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:28:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:28:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:54.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:28:54 compute-0 python3.9[135358]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:54 compute-0 sudo[135356]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:55 compute-0 sudo[135480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfejxcmgcomdqcwpvbxdeducjobcshih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397733.7867298-67-138562747292606/AnsiballZ_copy.py'
Nov 29 06:28:55 compute-0 sudo[135480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:55 compute-0 python3.9[135482]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397733.7867298-67-138562747292606/.source.conf _original_basename=ceph.conf follow=False checksum=b678e866ce48244e104f356f74865d3398155ff0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:55.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:55 compute-0 sudo[135480]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:55 compute-0 sudo[135632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qblfqcvdffxetaifzazavhfcksxzxenk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397735.403503-67-60973800921858/AnsiballZ_stat.py'
Nov 29 06:28:55 compute-0 sudo[135632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:56 compute-0 python3.9[135634]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:28:56 compute-0 sudo[135632]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:28:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:56.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:56 compute-0 ceph-mon[74654]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:56 compute-0 sudo[135757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsxvdroibcdliwzbbmevpyepsrlkjnbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397735.403503-67-60973800921858/AnsiballZ_copy.py'
Nov 29 06:28:56 compute-0 sudo[135757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:28:56 compute-0 python3.9[135759]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397735.403503-67-60973800921858/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=d5bc1b1c0617b147c8e3e13846b179249a244079 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:28:56 compute-0 sudo[135757]: pam_unix(sudo:session): session closed for user root
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.744763) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397736744854, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1241, "num_deletes": 253, "total_data_size": 2179703, "memory_usage": 2213976, "flush_reason": "Manual Compaction"}
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397736755757, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1349515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8466, "largest_seqno": 9706, "table_properties": {"data_size": 1344755, "index_size": 2156, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12190, "raw_average_key_size": 20, "raw_value_size": 1334447, "raw_average_value_size": 2246, "num_data_blocks": 99, "num_entries": 594, "num_filter_entries": 594, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764397574, "oldest_key_time": 1764397574, "file_creation_time": 1764397736, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 11030 microseconds, and 5211 cpu microseconds.
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.755807) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1349515 bytes OK
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.755826) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.757311) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.757328) EVENT_LOG_v1 {"time_micros": 1764397736757323, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.757347) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2174171, prev total WAL file size 2174171, number of live WAL files 2.
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.758251) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323534' seq:0, type:0; will stop at (end)
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1317KB)], [20(10002KB)]
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397736758329, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11591771, "oldest_snapshot_seqno": -1}
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3883 keys, 9448971 bytes, temperature: kUnknown
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397736884640, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 9448971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9417504, "index_size": 20669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9733, "raw_key_size": 95323, "raw_average_key_size": 24, "raw_value_size": 9341645, "raw_average_value_size": 2405, "num_data_blocks": 911, "num_entries": 3883, "num_filter_entries": 3883, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764397736, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.884964) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9448971 bytes
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.886489) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 91.7 rd, 74.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.8 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(15.6) write-amplify(7.0) OK, records in: 4361, records dropped: 478 output_compression: NoCompression
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.886509) EVENT_LOG_v1 {"time_micros": 1764397736886500, "job": 6, "event": "compaction_finished", "compaction_time_micros": 126391, "compaction_time_cpu_micros": 26076, "output_level": 6, "num_output_files": 1, "total_output_size": 9448971, "num_input_records": 4361, "num_output_records": 3883, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397736886860, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397736888610, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.758120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.888718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.888734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.888735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.888737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:28:56 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:28:56.888739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:28:56 compute-0 sshd-session[135682]: Invalid user deploy from 31.6.212.12 port 48696
Nov 29 06:28:57 compute-0 sshd-session[135053]: Connection closed by 192.168.122.30 port 34944
Nov 29 06:28:57 compute-0 sshd-session[135050]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:28:57 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Nov 29 06:28:57 compute-0 systemd[1]: session-45.scope: Consumed 2.993s CPU time.
Nov 29 06:28:57 compute-0 systemd-logind[797]: Session 45 logged out. Waiting for processes to exit.
Nov 29 06:28:57 compute-0 systemd-logind[797]: Removed session 45.
Nov 29 06:28:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:57.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:57 compute-0 sshd-session[135682]: Received disconnect from 31.6.212.12 port 48696:11: Bye Bye [preauth]
Nov 29 06:28:57 compute-0 sshd-session[135682]: Disconnected from invalid user deploy 31.6.212.12 port 48696 [preauth]
Nov 29 06:28:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:28:58.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:28:58 compute-0 ceph-mon[74654]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:28:58 compute-0 sshd-session[135785]: Invalid user test1 from 104.208.108.166 port 28106
Nov 29 06:28:59 compute-0 sshd-session[135785]: Received disconnect from 104.208.108.166 port 28106:11: Bye Bye [preauth]
Nov 29 06:28:59 compute-0 sshd-session[135785]: Disconnected from invalid user test1 104.208.108.166 port 28106 [preauth]
Nov 29 06:28:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:28:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:28:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:28:59.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:00 compute-0 ceph-mon[74654]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:00.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:29:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:01.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:29:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:29:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:29:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:02.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:29:02 compute-0 ceph-mon[74654]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:02 compute-0 sudo[135790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:29:02 compute-0 sudo[135790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:02 compute-0 sudo[135790]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:02 compute-0 sshd-session[135789]: Accepted publickey for zuul from 192.168.122.30 port 45866 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:29:02 compute-0 systemd-logind[797]: New session 46 of user zuul.
Nov 29 06:29:02 compute-0 systemd[1]: Started Session 46 of User zuul.
Nov 29 06:29:02 compute-0 sudo[135816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:29:02 compute-0 sshd-session[135789]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:29:02 compute-0 sudo[135816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:02 compute-0 sudo[135816]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:03.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:03 compute-0 python3.9[135994]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:29:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:04 compute-0 ceph-mon[74654]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:04.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:29:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:05.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:29:05 compute-0 sudo[136149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umybozkglamwadpbudmejzohruacoqqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397744.6487024-67-266299418393113/AnsiballZ_file.py'
Nov 29 06:29:05 compute-0 sudo[136149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:05 compute-0 python3.9[136151]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:29:05 compute-0 sudo[136149]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:05 compute-0 sudo[136303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbzthtsfotnsrkmslzsvebxewttagrje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397745.6800125-67-34509419372096/AnsiballZ_file.py'
Nov 29 06:29:05 compute-0 sudo[136303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:06 compute-0 python3.9[136305]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:29:06 compute-0 sudo[136303]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:29:06 compute-0 sshd-session[136177]: Invalid user hamed from 79.116.35.29 port 44826
Nov 29 06:29:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:06.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:06 compute-0 sshd-session[136177]: Received disconnect from 79.116.35.29 port 44826:11: Bye Bye [preauth]
Nov 29 06:29:06 compute-0 sshd-session[136177]: Disconnected from invalid user hamed 79.116.35.29 port 44826 [preauth]
Nov 29 06:29:06 compute-0 ceph-mon[74654]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:07 compute-0 python3.9[136455]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:29:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:29:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:07.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:29:07 compute-0 sudo[136608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luygflajlnjoazzmuhpczxamdcnilflc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397747.3489494-136-263710577501962/AnsiballZ_seboolean.py'
Nov 29 06:29:07 compute-0 sudo[136608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:08 compute-0 python3.9[136610]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 06:29:08 compute-0 ceph-mon[74654]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:08.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:08 compute-0 sshd-session[136487]: Received disconnect from 49.247.35.31 port 57317:11: Bye Bye [preauth]
Nov 29 06:29:08 compute-0 sshd-session[136487]: Disconnected from authenticating user root 49.247.35.31 port 57317 [preauth]
Nov 29 06:29:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:29:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:09.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:29:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:10.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:10 compute-0 sudo[136608]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:11.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:29:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:12.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:29:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:29:13 compute-0 ceph-mon[74654]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:13.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:13 compute-0 sudo[136767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byfedpekswldqcdvkmlbmasyrkijsvhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397753.4056346-166-193448667455754/AnsiballZ_setup.py'
Nov 29 06:29:13 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 29 06:29:13 compute-0 sudo[136767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:14 compute-0 python3.9[136769]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:29:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:14 compute-0 sudo[136767]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:14.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:14 compute-0 ceph-mon[74654]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:14 compute-0 ceph-mon[74654]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:14 compute-0 sudo[136851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxgpqznxtdijvxihszoqcblodxdhxaor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397753.4056346-166-193448667455754/AnsiballZ_dnf.py'
Nov 29 06:29:14 compute-0 sudo[136851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:15 compute-0 python3.9[136853]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:29:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:29:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:15.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:29:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:29:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:16.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:16 compute-0 ceph-mon[74654]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:17.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:17 compute-0 sudo[136851]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:18 compute-0 sudo[137006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmjbpkesgaluzdvsyzfzxzjglsqsvhge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397757.5012887-202-53807481779183/AnsiballZ_systemd.py'
Nov 29 06:29:18 compute-0 sudo[137006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:18.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:18 compute-0 python3.9[137008]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 06:29:18 compute-0 sudo[137006]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:19.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:19 compute-0 sudo[137162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsmgbagktflbmwfcbunbcyphzjxkcbeq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764397758.9510899-226-172970290202429/AnsiballZ_edpm_nftables_snippet.py'
Nov 29 06:29:19 compute-0 sudo[137162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:19 compute-0 python3[137164]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 29 06:29:19 compute-0 sudo[137162]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:20.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:20 compute-0 ceph-mon[74654]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:21.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:29:21 compute-0 sudo[137317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkwvvuofdhzimbeiadblpvnaapvgdmvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397760.9964583-253-161992660635421/AnsiballZ_file.py'
Nov 29 06:29:21 compute-0 sudo[137317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:21 compute-0 python3.9[137319]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:21 compute-0 sudo[137317]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:22 compute-0 sshd-session[137189]: Invalid user nginx from 103.147.159.91 port 53334
Nov 29 06:29:22 compute-0 ceph-mon[74654]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:22 compute-0 ceph-mon[74654]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:22 compute-0 sudo[137469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uclkbqeutznldmfolyrlalczwqicbbpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397761.6927586-277-171903968344247/AnsiballZ_stat.py'
Nov 29 06:29:22 compute-0 sudo[137469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:22 compute-0 sshd-session[137189]: Received disconnect from 103.147.159.91 port 53334:11: Bye Bye [preauth]
Nov 29 06:29:22 compute-0 sshd-session[137189]: Disconnected from invalid user nginx 103.147.159.91 port 53334 [preauth]
Nov 29 06:29:22 compute-0 python3.9[137471]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:29:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:22.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:22 compute-0 sudo[137469]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:22 compute-0 sudo[137547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rseqfrzvxaxrzwcievmriarcneflijfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397761.6927586-277-171903968344247/AnsiballZ_file.py'
Nov 29 06:29:22 compute-0 sudo[137547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:22 compute-0 python3.9[137549]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:22 compute-0 sudo[137547]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:22 compute-0 sudo[137550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:29:22 compute-0 sudo[137550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:22 compute-0 sudo[137550]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:22 compute-0 sudo[137595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:29:22 compute-0 sudo[137595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:22 compute-0 sudo[137595]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:23.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:23 compute-0 sudo[137752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvooclduitzjgszblyvoiegedcqxgcxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397763.0461507-313-218979140358310/AnsiballZ_stat.py'
Nov 29 06:29:23 compute-0 sudo[137752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:23 compute-0 python3.9[137754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:29:23 compute-0 sudo[137752]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:23 compute-0 sshd-session[137558]: Invalid user erpnext from 176.109.67.96 port 60176
Nov 29 06:29:23 compute-0 sshd-session[137558]: Received disconnect from 176.109.67.96 port 60176:11: Bye Bye [preauth]
Nov 29 06:29:23 compute-0 sshd-session[137558]: Disconnected from invalid user erpnext 176.109.67.96 port 60176 [preauth]
Nov 29 06:29:23 compute-0 sudo[137830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iexngohxxipcxfrtevbhiuprdroftpmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397763.0461507-313-218979140358310/AnsiballZ_file.py'
Nov 29 06:29:23 compute-0 sudo[137830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:24 compute-0 python3.9[137832]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5p81rd5q recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:24 compute-0 sudo[137830]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:24 compute-0 ceph-mon[74654]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:29:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:29:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:29:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:29:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:29:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:29:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:24.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:24 compute-0 sudo[137983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qglbafegtqfhfixawoymnwxudccztddh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397764.549157-349-19589036105558/AnsiballZ_stat.py'
Nov 29 06:29:24 compute-0 sudo[137983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:25 compute-0 python3.9[137985]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:29:25 compute-0 sudo[137983]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:25.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:25 compute-0 sudo[138061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcslobqplimenakhdmpsyfmleczobkvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397764.549157-349-19589036105558/AnsiballZ_file.py'
Nov 29 06:29:25 compute-0 sudo[138061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:25 compute-0 python3.9[138063]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:25 compute-0 sudo[138061]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:26 compute-0 sudo[138213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylaoyyvqxhanwqppqikwmozumiugcadv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397765.8473444-388-163133367656033/AnsiballZ_command.py'
Nov 29 06:29:26 compute-0 sudo[138213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:29:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:26.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:26 compute-0 python3.9[138215]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:29:26 compute-0 sudo[138213]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:29:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:27.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:29:27 compute-0 sudo[138367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izyfvzksyikuxxmkinlnbuxffuwoopiz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764397767.4798653-412-106947239810609/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 06:29:27 compute-0 sudo[138367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:27 compute-0 ceph-mon[74654]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:28 compute-0 python3[138369]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 06:29:28 compute-0 sudo[138367]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:28.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:28 compute-0 sudo[138519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhlgyfbqexnkjryuvwtqnipzpurtbsfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397768.4363558-436-75251911659796/AnsiballZ_stat.py'
Nov 29 06:29:28 compute-0 sudo[138519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:29 compute-0 python3.9[138521]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:29:29 compute-0 sudo[138519]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:29.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:29:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:29:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:29:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:29:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:29:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:29:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:29:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:29:29 compute-0 ceph-mon[74654]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:29:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:29:29 compute-0 sudo[138647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzeiccpmibihdimmxcvfuvmweksiykwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397768.4363558-436-75251911659796/AnsiballZ_copy.py'
Nov 29 06:29:29 compute-0 sudo[138647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:29 compute-0 python3.9[138649]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397768.4363558-436-75251911659796/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:29 compute-0 sudo[138647]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:29 compute-0 sshd-session[138616]: Received disconnect from 162.214.92.14 port 34132:11: Bye Bye [preauth]
Nov 29 06:29:29 compute-0 sshd-session[138616]: Disconnected from authenticating user root 162.214.92.14 port 34132 [preauth]
Nov 29 06:29:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:30.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:30 compute-0 sudo[138799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khicqvbypdtvsknkgxpnendnzecqaptx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397770.1768346-481-42492154048836/AnsiballZ_stat.py'
Nov 29 06:29:30 compute-0 sudo[138799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:30 compute-0 python3.9[138801]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:29:30 compute-0 sudo[138799]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:31.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:29:31 compute-0 sudo[138925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvxqgqmpynmymygruutnomxxfzdmfotw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397770.1768346-481-42492154048836/AnsiballZ_copy.py'
Nov 29 06:29:31 compute-0 sudo[138925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:31 compute-0 python3.9[138927]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397770.1768346-481-42492154048836/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:31 compute-0 sudo[138925]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:32.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:32 compute-0 sudo[139077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsxnoxwaatowpltqsjcffqqhqccgkwnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397772.0019581-526-199760301193895/AnsiballZ_stat.py'
Nov 29 06:29:32 compute-0 sudo[139077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:32 compute-0 python3.9[139079]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:29:32 compute-0 sudo[139077]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:33 compute-0 sudo[139203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zplrpqaerywpmownohyyrcvvbnrlysya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397772.0019581-526-199760301193895/AnsiballZ_copy.py'
Nov 29 06:29:33 compute-0 sudo[139203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:33.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:33 compute-0 ceph-mon[74654]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:33 compute-0 python3.9[139205]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397772.0019581-526-199760301193895/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:33 compute-0 sudo[139203]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:34 compute-0 sudo[139355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbjluzbzlkwkcsukfymqlutjcyffxowh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397773.782511-571-243522724198203/AnsiballZ_stat.py'
Nov 29 06:29:34 compute-0 sudo[139355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:34 compute-0 python3.9[139357]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:29:34 compute-0 sudo[139355]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:29:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:34.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:29:34 compute-0 sudo[139480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfchvetmtvlrgkubqjvnsxivkbjutalv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397773.782511-571-243522724198203/AnsiballZ_copy.py'
Nov 29 06:29:34 compute-0 sudo[139480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:34 compute-0 python3.9[139482]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397773.782511-571-243522724198203/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:34 compute-0 sudo[139480]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:34 compute-0 ceph-mon[74654]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:34 compute-0 ceph-mon[74654]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:29:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:35.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:29:35 compute-0 sudo[139633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kepgxmyhqqndmtejgwmcvoytqxznuyws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397775.5210905-616-267003980042778/AnsiballZ_stat.py'
Nov 29 06:29:35 compute-0 sudo[139633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:36 compute-0 python3.9[139635]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:29:36 compute-0 sudo[139633]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:29:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:36.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:36 compute-0 sudo[139758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlkyzzttpmdbxsrgttkfziweaqrzjmly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397775.5210905-616-267003980042778/AnsiballZ_copy.py'
Nov 29 06:29:36 compute-0 sudo[139758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:36 compute-0 python3.9[139760]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764397775.5210905-616-267003980042778/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:36 compute-0 sudo[139758]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:29:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:37.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:29:37 compute-0 sudo[139911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwuzocwehkyeqflkadvuhpqgjupnqxjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397777.2568583-661-88725020781046/AnsiballZ_file.py'
Nov 29 06:29:37 compute-0 sudo[139911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:37 compute-0 python3.9[139913]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:37 compute-0 sudo[139911]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:38.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:38 compute-0 sudo[140063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eahqfdvwbtjybycleeyhmxmvkjvymbea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397778.1274974-685-260275166424106/AnsiballZ_command.py'
Nov 29 06:29:38 compute-0 sudo[140063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:38 compute-0 ceph-mon[74654]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:38 compute-0 python3.9[140065]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:29:38 compute-0 sudo[140063]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:39.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:39 compute-0 sudo[140219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abfkmozyxhxvxqiuxfkoguznlfctzggf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397778.967259-709-100757663989605/AnsiballZ_blockinfile.py'
Nov 29 06:29:39 compute-0 sudo[140219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:39 compute-0 python3.9[140221]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:39 compute-0 sudo[140219]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:29:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:40.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:29:40 compute-0 sudo[140371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxaiepwtugefrjxomgivhvkjyzdmgeli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397780.0900035-736-91640421930951/AnsiballZ_command.py'
Nov 29 06:29:40 compute-0 sudo[140371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:40 compute-0 python3.9[140373]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:29:40 compute-0 sudo[140371]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:41 compute-0 sudo[140525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwxjxcaoncovnmmymclkvidrjvdccohy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397780.8668447-760-243154898489804/AnsiballZ_stat.py'
Nov 29 06:29:41 compute-0 sudo[140525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:29:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:41.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:29:41 compute-0 python3.9[140527]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:29:41 compute-0 sudo[140525]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:42 compute-0 sudo[140554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:29:42 compute-0 sudo[140554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:42 compute-0 sudo[140554]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:29:42 compute-0 ceph-mon[74654]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:42 compute-0 ceph-mon[74654]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:42 compute-0 sudo[140579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:29:42 compute-0 sudo[140579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:42 compute-0 sudo[140579]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:42 compute-0 sudo[140627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:29:42 compute-0 sudo[140627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:42 compute-0 sudo[140627]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:42.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:42 compute-0 sudo[140683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:29:42 compute-0 sudo[140683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:42 compute-0 sudo[140796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtdwfllmxphyxbuugtodrwlnrvpwzeuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397782.3046825-784-39372286114903/AnsiballZ_command.py'
Nov 29 06:29:42 compute-0 sudo[140796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:42 compute-0 sudo[140683]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:42 compute-0 python3.9[140799]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:29:42 compute-0 sudo[140796]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:43 compute-0 sudo[140820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:29:43 compute-0 sudo[140820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:43 compute-0 sudo[140820]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:43 compute-0 sudo[140869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:29:43 compute-0 sudo[140869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:43 compute-0 sudo[140869]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:29:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:43.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:29:43 compute-0 sudo[141019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olonsceaoebpuvmflyqebatlvvsyttrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397783.2163553-808-103373233762950/AnsiballZ_file.py'
Nov 29 06:29:43 compute-0 sudo[141019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:43 compute-0 sshd-session[140671]: Invalid user testing from 118.193.39.127 port 57446
Nov 29 06:29:43 compute-0 python3.9[141021]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:43 compute-0 sudo[141019]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:43 compute-0 sshd-session[140671]: Received disconnect from 118.193.39.127 port 57446:11: Bye Bye [preauth]
Nov 29 06:29:43 compute-0 sshd-session[140671]: Disconnected from invalid user testing 118.193.39.127 port 57446 [preauth]
Nov 29 06:29:44 compute-0 sshd-session[141022]: Invalid user stperez from 138.124.186.225 port 49136
Nov 29 06:29:44 compute-0 sshd-session[141022]: Received disconnect from 138.124.186.225 port 49136:11: Bye Bye [preauth]
Nov 29 06:29:44 compute-0 sshd-session[141022]: Disconnected from invalid user stperez 138.124.186.225 port 49136 [preauth]
Nov 29 06:29:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:44.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:29:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 06:29:44 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 06:29:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 06:29:44 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 06:29:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:45.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:45 compute-0 ceph-mon[74654]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:45 compute-0 ceph-mon[74654]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 06:29:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 06:29:45 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:29:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:29:46 compute-0 python3.9[141174]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:29:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:46.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 29 06:29:46 compute-0 sshd[1008]: Timeout before authentication for connection from 58.210.98.130 to 38.102.83.22, pid = 126726
Nov 29 06:29:46 compute-0 ceph-mon[74654]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:29:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:29:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:29:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:47.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:47 compute-0 ceph-mon[74654]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 29 06:29:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:29:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:48.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:49 compute-0 sudo[141327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgxxteeagxblsdibhprxlcsnwlvklgdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397788.9555478-928-13410174835985/AnsiballZ_command.py'
Nov 29 06:29:49 compute-0 sudo[141327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:49.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:49 compute-0 ceph-mon[74654]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:49 compute-0 python3.9[141329]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:29:49 compute-0 ovs-vsctl[141330]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 29 06:29:49 compute-0 sudo[141327]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:50.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:29:50 compute-0 sudo[141480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsrjyhxgtopikrdmggurmngkehndfnhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397790.086005-955-257687989272718/AnsiballZ_command.py'
Nov 29 06:29:50 compute-0 sudo[141480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:50 compute-0 python3.9[141482]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:29:51 compute-0 sudo[141480]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:51.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:51 compute-0 ceph-mon[74654]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:52 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:29:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:29:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:29:52 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:29:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:29:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:52.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:29:53 compute-0 sudo[141637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lodafibinlprnvlsdwbalxtcsbjgnxfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397792.7008018-979-94380058426223/AnsiballZ_command.py'
Nov 29 06:29:53 compute-0 sudo[141637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:29:53 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:29:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:29:53 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:29:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:29:53 compute-0 python3.9[141639]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:29:53 compute-0 ovs-vsctl[141640]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 29 06:29:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:53.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:53 compute-0 sudo[141637]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:53 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:29:53 compute-0 ceph-mon[74654]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:53 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:29:53 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:29:53 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 43d6d224-c1a4-4915-9418-38207f6d58d5 does not exist
Nov 29 06:29:53 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 56a11c13-4450-4733-b2ec-f83b649753b2 does not exist
Nov 29 06:29:53 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 8c672ff4-9a95-45e6-9aae-6688cf9b4e0a does not exist
Nov 29 06:29:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:29:53 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:29:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:29:53 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:29:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:29:53 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:29:53 compute-0 sudo[141741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:29:53 compute-0 sudo[141741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:53 compute-0 sudo[141741]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:54 compute-0 sudo[141792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:29:54 compute-0 sudo[141792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:54 compute-0 sudo[141792]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:54 compute-0 sudo[141841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:29:54 compute-0 sudo[141841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:54 compute-0 sudo[141841]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:54 compute-0 sudo[141866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:29:54 compute-0 sudo[141866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:54 compute-0 python3.9[141838]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:29:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:29:54
Nov 29 06:29:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:29:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:29:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images', '.rgw.root', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.meta']
Nov 29 06:29:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:29:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:29:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:29:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:29:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:29:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:29:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:29:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:29:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:29:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:29:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:29:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:29:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:29:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:54.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:54 compute-0 podman[141953]: 2025-11-29 06:29:54.478566452 +0000 UTC m=+0.050458234 container create b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:29:54 compute-0 podman[141953]: 2025-11-29 06:29:54.455868799 +0000 UTC m=+0.027760601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:29:54 compute-0 sudo[142092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcvvmbxsqefgaguilsdnrrexiwsiujwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397794.5642102-1030-225040967548714/AnsiballZ_file.py'
Nov 29 06:29:54 compute-0 sudo[142092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:54 compute-0 systemd[1]: Started libpod-conmon-b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480.scope.
Nov 29 06:29:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:29:55 compute-0 python3.9[142095]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:29:55 compute-0 sudo[142092]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:55 compute-0 podman[141953]: 2025-11-29 06:29:55.203315191 +0000 UTC m=+0.775206973 container init b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shannon, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 06:29:55 compute-0 podman[141953]: 2025-11-29 06:29:55.21024772 +0000 UTC m=+0.782139502 container start b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:29:55 compute-0 systemd[1]: libpod-b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480.scope: Deactivated successfully.
Nov 29 06:29:55 compute-0 clever_shannon[142098]: 167 167
Nov 29 06:29:55 compute-0 conmon[142098]: conmon b6b4d5b8fc6eba933127 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480.scope/container/memory.events
Nov 29 06:29:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:55.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:55 compute-0 podman[141953]: 2025-11-29 06:29:55.333475098 +0000 UTC m=+0.905366880 container attach b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shannon, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 06:29:55 compute-0 podman[141953]: 2025-11-29 06:29:55.334384775 +0000 UTC m=+0.906276567 container died b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shannon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 29 06:29:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6121f801e29c87226b4e0930563eb2700a7c97afbd730d5ec5c2c9abe2dcd983-merged.mount: Deactivated successfully.
Nov 29 06:29:55 compute-0 podman[141953]: 2025-11-29 06:29:55.502032692 +0000 UTC m=+1.073924474 container remove b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_shannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:29:55 compute-0 systemd[1]: libpod-conmon-b6b4d5b8fc6eba9331271088023407607fa247e3553ee9f80b76a692ad973480.scope: Deactivated successfully.
Nov 29 06:29:55 compute-0 ceph-mon[74654]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:55 compute-0 podman[142229]: 2025-11-29 06:29:55.652316349 +0000 UTC m=+0.036629836 container create bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 06:29:55 compute-0 systemd[1]: Started libpod-conmon-bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742.scope.
Nov 29 06:29:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:29:55 compute-0 sudo[142290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvqobmtbhilpuuthfczxgasqamucpcfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397795.3733807-1054-182225218346261/AnsiballZ_stat.py'
Nov 29 06:29:55 compute-0 sudo[142290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608006990b870a87c1b1886867c9f5d0f78e1d427a27551247aecf51b629c2a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608006990b870a87c1b1886867c9f5d0f78e1d427a27551247aecf51b629c2a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608006990b870a87c1b1886867c9f5d0f78e1d427a27551247aecf51b629c2a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:29:55 compute-0 podman[142229]: 2025-11-29 06:29:55.637111471 +0000 UTC m=+0.021424978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608006990b870a87c1b1886867c9f5d0f78e1d427a27551247aecf51b629c2a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/608006990b870a87c1b1886867c9f5d0f78e1d427a27551247aecf51b629c2a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:29:55 compute-0 podman[142229]: 2025-11-29 06:29:55.749069715 +0000 UTC m=+0.133383232 container init bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:29:55 compute-0 podman[142229]: 2025-11-29 06:29:55.76035909 +0000 UTC m=+0.144672597 container start bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:29:55 compute-0 podman[142229]: 2025-11-29 06:29:55.764922581 +0000 UTC m=+0.149236068 container attach bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 06:29:55 compute-0 python3.9[142292]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:29:55 compute-0 sudo[142290]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:56 compute-0 sudo[142370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xchsllvovunsmavmbaxumpzcpzywgwun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397795.3733807-1054-182225218346261/AnsiballZ_file.py'
Nov 29 06:29:56 compute-0 sudo[142370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:56.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:56 compute-0 python3.9[142372]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:29:56 compute-0 sudo[142370]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:56 compute-0 admiring_khayyam[142276]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:29:56 compute-0 admiring_khayyam[142276]: --> relative data size: 1.0
Nov 29 06:29:56 compute-0 admiring_khayyam[142276]: --> All data devices are unavailable
Nov 29 06:29:56 compute-0 systemd[1]: libpod-bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742.scope: Deactivated successfully.
Nov 29 06:29:56 compute-0 podman[142229]: 2025-11-29 06:29:56.631437441 +0000 UTC m=+1.015750928 container died bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 06:29:57 compute-0 sudo[142544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtgniyelgqnjwrocigtbkmjrdlamgtcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397796.7050943-1054-139342425169528/AnsiballZ_stat.py'
Nov 29 06:29:57 compute-0 sudo[142544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:57.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:29:57 compute-0 python3.9[142546]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:29:57 compute-0 sudo[142544]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:57 compute-0 sudo[142622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjlkpzylxrsbcdphobrwwwdeuogeqlgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397796.7050943-1054-139342425169528/AnsiballZ_file.py'
Nov 29 06:29:57 compute-0 sudo[142622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:57 compute-0 python3.9[142624]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:29:57 compute-0 sudo[142622]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-608006990b870a87c1b1886867c9f5d0f78e1d427a27551247aecf51b629c2a4-merged.mount: Deactivated successfully.
Nov 29 06:29:58 compute-0 podman[142229]: 2025-11-29 06:29:58.008924644 +0000 UTC m=+2.393238171 container remove bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 06:29:58 compute-0 systemd[1]: libpod-conmon-bb8f7561ae3ee158e5d951d0c7164c625385ee355fe1cdd8bf43dd210fdeb742.scope: Deactivated successfully.
Nov 29 06:29:58 compute-0 sudo[141866]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:58 compute-0 sudo[142652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:29:58 compute-0 sudo[142652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:58 compute-0 sudo[142652]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:58 compute-0 sudo[142701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:29:58 compute-0 sudo[142701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:58 compute-0 sudo[142701]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:58 compute-0 sudo[142754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:29:58 compute-0 sudo[142754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:58 compute-0 sudo[142754]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:58 compute-0 sudo[142801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:29:58 compute-0 sudo[142801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:29:58 compute-0 sudo[142877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-watrgzwahavbrlwkqxmyufexhzhdedsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397798.1150331-1123-7171418948583/AnsiballZ_file.py'
Nov 29 06:29:58 compute-0 sudo[142877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:29:58.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:58 compute-0 python3.9[142881]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:29:58 compute-0 podman[142921]: 2025-11-29 06:29:58.608847178 +0000 UTC m=+0.055355965 container create 7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 06:29:58 compute-0 sudo[142877]: pam_unix(sudo:session): session closed for user root
Nov 29 06:29:58 compute-0 systemd[1]: Started libpod-conmon-7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba.scope.
Nov 29 06:29:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:29:58 compute-0 podman[142921]: 2025-11-29 06:29:58.575823277 +0000 UTC m=+0.022332144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:29:58 compute-0 podman[142921]: 2025-11-29 06:29:58.687622926 +0000 UTC m=+0.134131733 container init 7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:29:58 compute-0 podman[142921]: 2025-11-29 06:29:58.696756839 +0000 UTC m=+0.143265616 container start 7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 06:29:58 compute-0 podman[142921]: 2025-11-29 06:29:58.699740655 +0000 UTC m=+0.146249432 container attach 7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 06:29:58 compute-0 friendly_bohr[142940]: 167 167
Nov 29 06:29:58 compute-0 systemd[1]: libpod-7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba.scope: Deactivated successfully.
Nov 29 06:29:58 compute-0 podman[142921]: 2025-11-29 06:29:58.701701252 +0000 UTC m=+0.148210029 container died 7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 06:29:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5aace8dda0a2d1d71217228568b68648eb98b9d3f3540dac9a061642f1b95136-merged.mount: Deactivated successfully.
Nov 29 06:29:58 compute-0 podman[142921]: 2025-11-29 06:29:58.944289667 +0000 UTC m=+0.390798444 container remove 7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:29:58 compute-0 systemd[1]: libpod-conmon-7ed2b690d2ac943378a128a1b10f7baf62b69f80d852aec8b15ca508355eeeba.scope: Deactivated successfully.
Nov 29 06:29:59 compute-0 podman[143007]: 2025-11-29 06:29:59.132326011 +0000 UTC m=+0.073359383 container create d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shockley, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 06:29:59 compute-0 podman[143007]: 2025-11-29 06:29:59.085542984 +0000 UTC m=+0.026576366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:29:59 compute-0 systemd[1]: Started libpod-conmon-d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f.scope.
Nov 29 06:29:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394928abc5a97045dd2219a2d8b9acd23c089b8efbb42e981085caeaee0071e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394928abc5a97045dd2219a2d8b9acd23c089b8efbb42e981085caeaee0071e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394928abc5a97045dd2219a2d8b9acd23c089b8efbb42e981085caeaee0071e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394928abc5a97045dd2219a2d8b9acd23c089b8efbb42e981085caeaee0071e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:29:59 compute-0 podman[143007]: 2025-11-29 06:29:59.245583742 +0000 UTC m=+0.186617194 container init d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:29:59 compute-0 podman[143007]: 2025-11-29 06:29:59.254428447 +0000 UTC m=+0.195461809 container start d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 06:29:59 compute-0 podman[143007]: 2025-11-29 06:29:59.26218544 +0000 UTC m=+0.203218842 container attach d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shockley, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 06:29:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:29:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:29:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:29:59.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:29:59 compute-0 ceph-mon[74654]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:29:59 compute-0 sudo[143131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skhlnwgtxvymmorwcqislvtzdhnfowwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397799.130747-1147-214581630436569/AnsiballZ_stat.py'
Nov 29 06:29:59 compute-0 sudo[143131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:29:59 compute-0 python3.9[143133]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:29:59 compute-0 sudo[143131]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 06:30:00 compute-0 sudo[143213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgwqszyhkftlhqqstwrmisvivyqresjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397799.130747-1147-214581630436569/AnsiballZ_file.py'
Nov 29 06:30:00 compute-0 sudo[143213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:00 compute-0 gallant_shockley[143024]: {
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:     "1": [
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:         {
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:             "devices": [
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:                 "/dev/loop3"
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:             ],
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:             "lv_name": "ceph_lv0",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:             "lv_size": "7511998464",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:             "name": "ceph_lv0",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:             "tags": {
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:                 "ceph.cluster_name": "ceph",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:                 "ceph.crush_device_class": "",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:                 "ceph.encrypted": "0",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:                 "ceph.osd_id": "1",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:                 "ceph.type": "block",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:                 "ceph.vdo": "0"
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:             },
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:             "type": "block",
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:             "vg_name": "ceph_vg0"
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:         }
Nov 29 06:30:00 compute-0 gallant_shockley[143024]:     ]
Nov 29 06:30:00 compute-0 gallant_shockley[143024]: }
Nov 29 06:30:00 compute-0 systemd[1]: libpod-d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f.scope: Deactivated successfully.
Nov 29 06:30:00 compute-0 podman[143007]: 2025-11-29 06:30:00.105269125 +0000 UTC m=+1.046302497 container died d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shockley, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:30:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-394928abc5a97045dd2219a2d8b9acd23c089b8efbb42e981085caeaee0071e3-merged.mount: Deactivated successfully.
Nov 29 06:30:00 compute-0 podman[143007]: 2025-11-29 06:30:00.175990151 +0000 UTC m=+1.117023523 container remove d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:30:00 compute-0 systemd[1]: libpod-conmon-d50cdd39f7b62695ecb8ff8f5b2c3655be69d68c0884e8f3eb23812945e54c4f.scope: Deactivated successfully.
Nov 29 06:30:00 compute-0 sudo[142801]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:00 compute-0 sudo[143228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:30:00 compute-0 sudo[143228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:30:00 compute-0 sudo[143228]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:00 compute-0 python3.9[143215]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:00 compute-0 sudo[143213]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:00 compute-0 sudo[143253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:30:00 compute-0 sudo[143253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:30:00 compute-0 sudo[143253]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:00.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:00 compute-0 sudo[143280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:30:00 compute-0 sudo[143280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:30:00 compute-0 sudo[143280]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:00 compute-0 sudo[143327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:30:00 compute-0 sudo[143327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:30:00 compute-0 podman[143483]: 2025-11-29 06:30:00.865500355 +0000 UTC m=+0.064962882 container create 70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_rosalind, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 06:30:00 compute-0 sudo[143529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kljycxospgcrywtnmozvqxwikthsqyma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397800.5370016-1183-212385907299336/AnsiballZ_stat.py'
Nov 29 06:30:00 compute-0 sudo[143529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:00 compute-0 podman[143483]: 2025-11-29 06:30:00.824330639 +0000 UTC m=+0.023793156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:30:00 compute-0 systemd[1]: Started libpod-conmon-70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77.scope.
Nov 29 06:30:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:30:01 compute-0 python3.9[143532]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:30:01 compute-0 podman[143483]: 2025-11-29 06:30:01.103328593 +0000 UTC m=+0.302791180 container init 70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_rosalind, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 06:30:01 compute-0 sudo[143529]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:01 compute-0 podman[143483]: 2025-11-29 06:30:01.11157306 +0000 UTC m=+0.311035597 container start 70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_rosalind, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 06:30:01 compute-0 confident_rosalind[143535]: 167 167
Nov 29 06:30:01 compute-0 systemd[1]: libpod-70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77.scope: Deactivated successfully.
Nov 29 06:30:01 compute-0 podman[143483]: 2025-11-29 06:30:01.181945026 +0000 UTC m=+0.381407523 container attach 70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 06:30:01 compute-0 podman[143483]: 2025-11-29 06:30:01.18312777 +0000 UTC m=+0.382590297 container died 70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_rosalind, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf0e4ebdb006d716dfee6b68ee2c8f5e57a3f51f5e49348803512bc104c3c967-merged.mount: Deactivated successfully.
Nov 29 06:30:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:01.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:01 compute-0 podman[143483]: 2025-11-29 06:30:01.323407019 +0000 UTC m=+0.522869516 container remove 70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 06:30:01 compute-0 systemd[1]: libpod-conmon-70d3553ade4de74b1c8566bb612cb13694850828b9abe85ecc28bc7eccf3ee77.scope: Deactivated successfully.
Nov 29 06:30:01 compute-0 sudo[143629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkmwahlowtwrllemgyasfkzfvppgxxcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397800.5370016-1183-212385907299336/AnsiballZ_file.py'
Nov 29 06:30:01 compute-0 sudo[143629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:01 compute-0 podman[143637]: 2025-11-29 06:30:01.511863856 +0000 UTC m=+0.048451136 container create e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 06:30:01 compute-0 systemd[1]: Started libpod-conmon-e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642.scope.
Nov 29 06:30:01 compute-0 ceph-mon[74654]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:01 compute-0 ceph-mon[74654]: overall HEALTH_OK
Nov 29 06:30:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0601f0953f12c833d3e8421b3a883694f9f33fe8344b525b82dc31569bd96c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0601f0953f12c833d3e8421b3a883694f9f33fe8344b525b82dc31569bd96c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0601f0953f12c833d3e8421b3a883694f9f33fe8344b525b82dc31569bd96c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:30:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0601f0953f12c833d3e8421b3a883694f9f33fe8344b525b82dc31569bd96c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:30:01 compute-0 podman[143637]: 2025-11-29 06:30:01.49117954 +0000 UTC m=+0.027766820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:30:01 compute-0 podman[143637]: 2025-11-29 06:30:01.593104775 +0000 UTC m=+0.129692065 container init e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 06:30:01 compute-0 podman[143637]: 2025-11-29 06:30:01.6019602 +0000 UTC m=+0.138547460 container start e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:30:01 compute-0 podman[143637]: 2025-11-29 06:30:01.606181812 +0000 UTC m=+0.142769132 container attach e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 06:30:01 compute-0 python3.9[143631]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:01 compute-0 sudo[143629]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:02 compute-0 sudo[143809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnzrnmzxkjxadnhctkqmkrmwfzlvdosi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397801.8301876-1219-142428598265563/AnsiballZ_systemd.py'
Nov 29 06:30:02 compute-0 sudo[143809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:02 compute-0 suspicious_torvalds[143655]: {
Nov 29 06:30:02 compute-0 suspicious_torvalds[143655]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:30:02 compute-0 suspicious_torvalds[143655]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:30:02 compute-0 suspicious_torvalds[143655]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:30:02 compute-0 suspicious_torvalds[143655]:         "osd_id": 1,
Nov 29 06:30:02 compute-0 suspicious_torvalds[143655]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:30:02 compute-0 suspicious_torvalds[143655]:         "type": "bluestore"
Nov 29 06:30:02 compute-0 suspicious_torvalds[143655]:     }
Nov 29 06:30:02 compute-0 suspicious_torvalds[143655]: }
Nov 29 06:30:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:02.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:02 compute-0 systemd[1]: libpod-e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642.scope: Deactivated successfully.
Nov 29 06:30:02 compute-0 podman[143637]: 2025-11-29 06:30:02.43026794 +0000 UTC m=+0.966855200 container died e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 06:30:02 compute-0 python3.9[143811]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:30:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:30:02 compute-0 systemd[1]: Reloading.
Nov 29 06:30:02 compute-0 systemd-rc-local-generator[143862]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:30:02 compute-0 systemd-sysv-generator[143866]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:30:02 compute-0 sudo[143809]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:03 compute-0 sudo[143976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:30:03 compute-0 sudo[143976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:30:03 compute-0 sudo[143976]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:03 compute-0 sudo[144014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:30:03 compute-0 sudo[144014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:30:03 compute-0 sudo[144014]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:03.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:03 compute-0 sudo[144076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkqfgottxjmlniyyxicqseebthmlrrst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397803.0523646-1243-8856892163582/AnsiballZ_stat.py'
Nov 29 06:30:03 compute-0 sudo[144076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:03 compute-0 ceph-mon[74654]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:03 compute-0 python3.9[144078]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:30:03 compute-0 sudo[144076]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f0601f0953f12c833d3e8421b3a883694f9f33fe8344b525b82dc31569bd96c-merged.mount: Deactivated successfully.
Nov 29 06:30:03 compute-0 sudo[144155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgduvwqzltecyzbawndhmvxqdirzjtqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397803.0523646-1243-8856892163582/AnsiballZ_file.py'
Nov 29 06:30:03 compute-0 sudo[144155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:04 compute-0 python3.9[144157]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:04 compute-0 sudo[144155]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:04.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:04 compute-0 podman[143637]: 2025-11-29 06:30:04.501778856 +0000 UTC m=+3.038366126 container remove e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:30:04 compute-0 systemd[1]: libpod-conmon-e42c78a83699120790e930044ab02970fa4072b40ed4e370ef90a1e78c1ed642.scope: Deactivated successfully.
Nov 29 06:30:04 compute-0 sudo[143327]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:30:04 compute-0 sudo[144307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmaexbcvnilmxzaxqjafpjejppsvvxiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397804.507441-1279-87811450603905/AnsiballZ_stat.py'
Nov 29 06:30:04 compute-0 sudo[144307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:05 compute-0 python3.9[144310]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:30:05 compute-0 sudo[144307]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:05.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:05 compute-0 ceph-mon[74654]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:05 compute-0 sudo[144386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohnbsqlrxyhvssywrhwjpozcvoyjxhkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397804.507441-1279-87811450603905/AnsiballZ_file.py'
Nov 29 06:30:05 compute-0 sudo[144386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:05 compute-0 python3.9[144388]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:05 compute-0 sudo[144386]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:05 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:30:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:30:06 compute-0 sudo[144538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhdxujgdzqpopdnjdsqtdqurfgwhssqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397805.7232716-1315-21004422078503/AnsiballZ_systemd.py'
Nov 29 06:30:06 compute-0 sudo[144538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:30:06 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 2c3e7a83-4cfd-4cbc-915e-4b455314c20a does not exist
Nov 29 06:30:06 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 04b0d6b0-f620-42c3-8516-3f41fd175b58 does not exist
Nov 29 06:30:06 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev ce673716-89ff-4575-951f-fea6e049d8ce does not exist
Nov 29 06:30:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:06 compute-0 sudo[144541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:30:06 compute-0 sudo[144541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:30:06 compute-0 sudo[144541]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:06 compute-0 python3.9[144540]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:30:06 compute-0 systemd[1]: Reloading.
Nov 29 06:30:06 compute-0 ceph-mon[74654]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:06 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:30:06 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:30:06 compute-0 systemd-sysv-generator[144621]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:30:06 compute-0 systemd-rc-local-generator[144618]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:30:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:06.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:06 compute-0 sudo[144566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:30:06 compute-0 sudo[144566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:30:06 compute-0 sudo[144566]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:06 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 06:30:06 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 06:30:06 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 06:30:06 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 06:30:06 compute-0 sudo[144538]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:07.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:30:07 compute-0 sudo[144783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duvrfhgikdrcyxmduoeguvdxcuiogqxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397807.3156054-1345-276213374683161/AnsiballZ_file.py'
Nov 29 06:30:07 compute-0 sudo[144783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:07 compute-0 python3.9[144785]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:30:07 compute-0 sudo[144783]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:08.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:08 compute-0 sudo[144935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-praynpnlufafucnwjgmweyeyawztaoxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397808.1351464-1369-68044897903705/AnsiballZ_stat.py'
Nov 29 06:30:08 compute-0 sudo[144935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:08 compute-0 python3.9[144937]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:30:08 compute-0 sudo[144935]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:09 compute-0 sudo[145059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnbcixlnmcyalwhtwgnurzcriuwgrkde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397808.1351464-1369-68044897903705/AnsiballZ_copy.py'
Nov 29 06:30:09 compute-0 sudo[145059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:09 compute-0 python3.9[145061]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397808.1351464-1369-68044897903705/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:30:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:30:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:09.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:30:09 compute-0 sudo[145059]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:10 compute-0 sudo[145211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncwizxqrgbshfyxutyiqxylazgryaqfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397809.8133616-1420-192035249521712/AnsiballZ_file.py'
Nov 29 06:30:10 compute-0 sudo[145211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:10 compute-0 python3.9[145213]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:30:10 compute-0 sudo[145211]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:10.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:10 compute-0 sudo[145364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kopthliekiwlvzwugpscegnevuudsjzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397810.6240585-1444-191987691393897/AnsiballZ_stat.py'
Nov 29 06:30:10 compute-0 sudo[145364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:11 compute-0 python3.9[145366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:30:11 compute-0 sudo[145364]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:11.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:11 compute-0 sudo[145487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iejinnoluyradonqdfhvdkvhxzrjneyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397810.6240585-1444-191987691393897/AnsiballZ_copy.py'
Nov 29 06:30:11 compute-0 sudo[145487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:11 compute-0 python3.9[145489]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397810.6240585-1444-191987691393897/.source.json _original_basename=.yqpy235r follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:11 compute-0 sudo[145487]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:12.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:12 compute-0 sudo[145639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiucbuazunefoffpyqudikznyyrfassk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397812.1357076-1489-159206893915486/AnsiballZ_file.py'
Nov 29 06:30:12 compute-0 sudo[145639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:12 compute-0 python3.9[145641]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:12 compute-0 sudo[145639]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:30:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:30:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:30:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:13.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:30:13 compute-0 sudo[145792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxqzuhsatgcshojeuuyjrnthjtgjhcua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397813.0305312-1513-24037254580972/AnsiballZ_stat.py'
Nov 29 06:30:13 compute-0 sudo[145792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:13 compute-0 sudo[145792]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:14 compute-0 sudo[145915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onwcgfzyeahzyjgavlrxuqdtiarkwsqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397813.0305312-1513-24037254580972/AnsiballZ_copy.py'
Nov 29 06:30:14 compute-0 sudo[145915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:14 compute-0 ceph-mon[74654]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:14.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:14 compute-0 sudo[145915]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:30:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:15.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:15 compute-0 sshd-session[145918]: Invalid user gitea from 104.208.108.166 port 11556
Nov 29 06:30:15 compute-0 sudo[146070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iieztmqpbdugpjwnbcwdpkapdpblgjvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397814.9583683-1564-35787317132789/AnsiballZ_container_config_data.py'
Nov 29 06:30:15 compute-0 sudo[146070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:15 compute-0 sshd-session[145918]: Received disconnect from 104.208.108.166 port 11556:11: Bye Bye [preauth]
Nov 29 06:30:15 compute-0 sshd-session[145918]: Disconnected from invalid user gitea 104.208.108.166 port 11556 [preauth]
Nov 29 06:30:15 compute-0 python3.9[146072]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 29 06:30:15 compute-0 sudo[146070]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:16.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:16 compute-0 sudo[146222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsbwdsaummopjuakindhextpmlewetqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397815.9570594-1591-140838983354450/AnsiballZ_container_config_hash.py'
Nov 29 06:30:16 compute-0 sudo[146222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:16 compute-0 python3.9[146224]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 06:30:16 compute-0 sudo[146222]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:17 compute-0 ceph-mon[74654]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:17 compute-0 ceph-mon[74654]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:17 compute-0 ceph-mon[74654]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:17 compute-0 ceph-mon[74654]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:30:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:17.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:30:17 compute-0 sudo[146375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywyfylidsstcyduigemutyfqczvwvqwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397817.0235696-1618-247052152877131/AnsiballZ_podman_container_info.py'
Nov 29 06:30:17 compute-0 sudo[146375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:17 compute-0 python3.9[146377]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 06:30:18 compute-0 sudo[146375]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:18 compute-0 ceph-mon[74654]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:30:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:18.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:30:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:19.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:19 compute-0 sudo[146556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znoqyzielrplrhobefcukprhsercrqst ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764397818.9412508-1657-105143584978173/AnsiballZ_edpm_container_manage.py'
Nov 29 06:30:19 compute-0 ceph-mon[74654]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:19 compute-0 sudo[146556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:19 compute-0 python3[146558]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 06:30:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:30:20 compute-0 sshd-session[146522]: Invalid user zhangsan from 79.116.35.29 port 44154
Nov 29 06:30:20 compute-0 sshd-session[146522]: Received disconnect from 79.116.35.29 port 44154:11: Bye Bye [preauth]
Nov 29 06:30:20 compute-0 sshd-session[146522]: Disconnected from invalid user zhangsan 79.116.35.29 port 44154 [preauth]
Nov 29 06:30:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:20.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:21.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:21 compute-0 ceph-mon[74654]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:22.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:22 compute-0 ceph-mon[74654]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:23.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:23 compute-0 sudo[146624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:30:23 compute-0 sudo[146624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:30:23 compute-0 sudo[146624]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:23 compute-0 sudo[146649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:30:23 compute-0 sudo[146649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:30:23 compute-0 sudo[146649]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:23 compute-0 sshd-session[146622]: Invalid user hadoop from 31.6.212.12 port 49060
Nov 29 06:30:24 compute-0 sshd-session[146622]: Received disconnect from 31.6.212.12 port 49060:11: Bye Bye [preauth]
Nov 29 06:30:24 compute-0 sshd-session[146622]: Disconnected from invalid user hadoop 31.6.212.12 port 49060 [preauth]
Nov 29 06:30:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:30:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:30:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:30:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:30:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:30:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:30:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:24.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:30:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:25.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:26.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:27.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:28 compute-0 ceph-mon[74654]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:28.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:29.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:30:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:30:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:30:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:30:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:30:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:30:29 compute-0 ceph-mon[74654]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:29 compute-0 ceph-mon[74654]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:30:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:30:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:30:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:30:29 compute-0 podman[146572]: 2025-11-29 06:30:29.678281414 +0000 UTC m=+9.766168840 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 06:30:29 compute-0 podman[146747]: 2025-11-29 06:30:29.811910665 +0000 UTC m=+0.042226957 container create b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:30:29 compute-0 podman[146747]: 2025-11-29 06:30:29.790117953 +0000 UTC m=+0.020434245 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 06:30:29 compute-0 python3[146558]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 06:30:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:30:29 compute-0 sudo[146556]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:30 compute-0 sudo[146933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiqcawjkklduoclgbcsqhghyodsqrfml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397830.0990918-1681-280977504667915/AnsiballZ_stat.py'
Nov 29 06:30:30 compute-0 sudo[146933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:30.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:30 compute-0 python3.9[146935]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:30:30 compute-0 sudo[146933]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:31 compute-0 sudo[147088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vupvhjvqgevqpdmtdfjvfyvbxhkvibey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397830.9880135-1708-183182563973686/AnsiballZ_file.py'
Nov 29 06:30:31 compute-0 sudo[147088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:31.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:31 compute-0 python3.9[147090]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:31 compute-0 sudo[147088]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:32 compute-0 sudo[147164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiiyqqghpiowriomybolootpxomdycmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397830.9880135-1708-183182563973686/AnsiballZ_stat.py'
Nov 29 06:30:32 compute-0 sudo[147164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:30:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:32.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:30:32 compute-0 python3.9[147166]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:30:32 compute-0 sudo[147164]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:33 compute-0 sudo[147316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efirrgsmyktzgwlxokmehqkxwnirfker ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397832.6898708-1708-64719301600252/AnsiballZ_copy.py'
Nov 29 06:30:33 compute-0 sudo[147316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:33.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:34.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000055s ======
Nov 29 06:30:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:35.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Nov 29 06:30:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:30:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:36.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:30:36 compute-0 sshd-session[147320]: Invalid user mcserver from 176.109.67.96 port 39794
Nov 29 06:30:36 compute-0 sshd-session[147320]: Received disconnect from 176.109.67.96 port 39794:11: Bye Bye [preauth]
Nov 29 06:30:36 compute-0 sshd-session[147320]: Disconnected from invalid user mcserver 176.109.67.96 port 39794 [preauth]
Nov 29 06:30:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:30:36 compute-0 ceph-mon[74654]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:37 compute-0 python3.9[147318]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397832.6898708-1708-64719301600252/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:30:37 compute-0 sudo[147316]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:37 compute-0 sudo[147396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-selkwppqacyhxrjhnckgxbrrbzahhghd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397832.6898708-1708-64719301600252/AnsiballZ_systemd.py'
Nov 29 06:30:37 compute-0 sudo[147396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:37.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:37 compute-0 python3.9[147398]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 06:30:37 compute-0 systemd[1]: Reloading.
Nov 29 06:30:37 compute-0 systemd-rc-local-generator[147429]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:30:37 compute-0 systemd-sysv-generator[147432]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:30:37 compute-0 sudo[147396]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:38 compute-0 sudo[147510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwdztbmhctllzbpcptekawrczyacdwcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397832.6898708-1708-64719301600252/AnsiballZ_systemd.py'
Nov 29 06:30:38 compute-0 sudo[147510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:38.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:38 compute-0 python3.9[147512]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:30:38 compute-0 systemd[1]: Reloading.
Nov 29 06:30:38 compute-0 systemd-rc-local-generator[147537]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:30:38 compute-0 systemd-sysv-generator[147545]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:30:38 compute-0 sshd-session[147400]: Received disconnect from 49.247.35.31 port 36185:11: Bye Bye [preauth]
Nov 29 06:30:38 compute-0 sshd-session[147400]: Disconnected from authenticating user root 49.247.35.31 port 36185 [preauth]
Nov 29 06:30:39 compute-0 systemd[1]: Starting ovn_controller container...
Nov 29 06:30:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8baf2007c97915a8b8de2e1107524df74412b4e46fb38e4f4437d65da64f4c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 06:30:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:39.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:39 compute-0 ceph-mon[74654]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:39 compute-0 ceph-mon[74654]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:39 compute-0 ceph-mon[74654]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:39 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7.
Nov 29 06:30:39 compute-0 podman[147554]: 2025-11-29 06:30:39.469276729 +0000 UTC m=+0.440855806 container init b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 06:30:39 compute-0 ovn_controller[147569]: + sudo -E kolla_set_configs
Nov 29 06:30:39 compute-0 podman[147554]: 2025-11-29 06:30:39.512082872 +0000 UTC m=+0.483661859 container start b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 06:30:39 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 29 06:30:39 compute-0 edpm-start-podman-container[147554]: ovn_controller
Nov 29 06:30:39 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 29 06:30:39 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 29 06:30:39 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 29 06:30:39 compute-0 systemd[147597]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 29 06:30:39 compute-0 edpm-start-podman-container[147553]: Creating additional drop-in dependency for "ovn_controller" (b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7)
Nov 29 06:30:39 compute-0 podman[147575]: 2025-11-29 06:30:39.624839276 +0000 UTC m=+0.094258664 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 06:30:39 compute-0 systemd[1]: Reloading.
Nov 29 06:30:39 compute-0 systemd-rc-local-generator[147652]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:30:39 compute-0 systemd-sysv-generator[147656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:30:39 compute-0 systemd[147597]: Queued start job for default target Main User Target.
Nov 29 06:30:39 compute-0 systemd[147597]: Created slice User Application Slice.
Nov 29 06:30:39 compute-0 systemd[147597]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 29 06:30:39 compute-0 systemd[147597]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 06:30:39 compute-0 systemd[147597]: Reached target Paths.
Nov 29 06:30:39 compute-0 systemd[147597]: Reached target Timers.
Nov 29 06:30:39 compute-0 systemd[147597]: Starting D-Bus User Message Bus Socket...
Nov 29 06:30:39 compute-0 systemd[147597]: Starting Create User's Volatile Files and Directories...
Nov 29 06:30:39 compute-0 systemd[147597]: Finished Create User's Volatile Files and Directories.
Nov 29 06:30:39 compute-0 systemd[147597]: Listening on D-Bus User Message Bus Socket.
Nov 29 06:30:39 compute-0 systemd[147597]: Reached target Sockets.
Nov 29 06:30:39 compute-0 systemd[147597]: Reached target Basic System.
Nov 29 06:30:39 compute-0 systemd[147597]: Reached target Main User Target.
Nov 29 06:30:39 compute-0 systemd[147597]: Startup finished in 148ms.
Nov 29 06:30:39 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 29 06:30:39 compute-0 systemd[1]: Started ovn_controller container.
Nov 29 06:30:39 compute-0 systemd[1]: b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7-5bed9f1a8190501d.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 06:30:39 compute-0 systemd[1]: b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7-5bed9f1a8190501d.service: Failed with result 'exit-code'.
Nov 29 06:30:39 compute-0 systemd[1]: Started Session c1 of User root.
Nov 29 06:30:39 compute-0 sudo[147510]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:39 compute-0 ovn_controller[147569]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 06:30:39 compute-0 ovn_controller[147569]: INFO:__main__:Validating config file
Nov 29 06:30:39 compute-0 ovn_controller[147569]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 06:30:39 compute-0 ovn_controller[147569]: INFO:__main__:Writing out command to execute
Nov 29 06:30:39 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 29 06:30:39 compute-0 ovn_controller[147569]: ++ cat /run_command
Nov 29 06:30:39 compute-0 ovn_controller[147569]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 06:30:39 compute-0 ovn_controller[147569]: + ARGS=
Nov 29 06:30:39 compute-0 ovn_controller[147569]: + sudo kolla_copy_cacerts
Nov 29 06:30:40 compute-0 systemd[1]: Started Session c2 of User root.
Nov 29 06:30:40 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 29 06:30:40 compute-0 ovn_controller[147569]: + [[ ! -n '' ]]
Nov 29 06:30:40 compute-0 ovn_controller[147569]: + . kolla_extend_start
Nov 29 06:30:40 compute-0 ovn_controller[147569]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 06:30:40 compute-0 ovn_controller[147569]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 29 06:30:40 compute-0 ovn_controller[147569]: + umask 0022
Nov 29 06:30:40 compute-0 ovn_controller[147569]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 29 06:30:40 compute-0 NetworkManager[49224]: <info>  [1764397840.0530] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 29 06:30:40 compute-0 NetworkManager[49224]: <info>  [1764397840.0539] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 06:30:40 compute-0 NetworkManager[49224]: <info>  [1764397840.0552] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 29 06:30:40 compute-0 NetworkManager[49224]: <info>  [1764397840.0559] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 29 06:30:40 compute-0 NetworkManager[49224]: <info>  [1764397840.0564] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 06:30:40 compute-0 kernel: br-int: entered promiscuous mode
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00024|main|INFO|OVS feature set changed, force recompute.
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 06:30:40 compute-0 NetworkManager[49224]: <info>  [1764397840.0791] manager: (ovn-2fa832-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 29 06:30:40 compute-0 NetworkManager[49224]: <info>  [1764397840.0799] manager: (ovn-e15f55-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Nov 29 06:30:40 compute-0 NetworkManager[49224]: <info>  [1764397840.0806] manager: (ovn-fa6f2e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 29 06:30:40 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 29 06:30:40 compute-0 systemd-udevd[147723]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 06:30:40 compute-0 systemd-udevd[147725]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 06:30:40 compute-0 NetworkManager[49224]: <info>  [1764397840.0964] device (genev_sys_6081): carrier: link connected
Nov 29 06:30:40 compute-0 NetworkManager[49224]: <info>  [1764397840.0966] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Nov 29 06:30:40 compute-0 ovn_controller[147569]: 2025-11-29T06:30:40Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 06:30:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:40 compute-0 ceph-mon[74654]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:40 compute-0 sudo[147833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsxueleefxqqmtujurdofthypdkighjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397840.1391482-1792-252943044603136/AnsiballZ_command.py'
Nov 29 06:30:40 compute-0 sudo[147833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:40.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:40 compute-0 python3.9[147835]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:30:40 compute-0 ovs-vsctl[147836]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 29 06:30:40 compute-0 sudo[147833]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:41 compute-0 sudo[147987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxvbikzymmsxxrsolnxhktbqxwmouwhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397840.911724-1816-40105852765425/AnsiballZ_command.py'
Nov 29 06:30:41 compute-0 sudo[147987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:30:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:41.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:30:41 compute-0 python3.9[147989]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:30:41 compute-0 ovs-vsctl[147991]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 29 06:30:41 compute-0 sudo[147987]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:30:42 compute-0 ceph-mon[74654]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:42 compute-0 sshd-session[148040]: Invalid user mark from 162.214.92.14 port 33288
Nov 29 06:30:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:42.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:42 compute-0 sshd-session[148040]: Received disconnect from 162.214.92.14 port 33288:11: Bye Bye [preauth]
Nov 29 06:30:42 compute-0 sshd-session[148040]: Disconnected from invalid user mark 162.214.92.14 port 33288 [preauth]
Nov 29 06:30:42 compute-0 sudo[148144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjmdegrfwxdnnjzgskjhjfmxilfqigtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397842.0791829-1858-69062985399904/AnsiballZ_command.py'
Nov 29 06:30:42 compute-0 sudo[148144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:42 compute-0 python3.9[148146]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:30:42 compute-0 ovs-vsctl[148147]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 29 06:30:42 compute-0 sudo[148144]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:30:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:43.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:30:43 compute-0 sshd-session[135842]: Connection closed by 192.168.122.30 port 45866
Nov 29 06:30:43 compute-0 sshd-session[135789]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:30:43 compute-0 systemd-logind[797]: Session 46 logged out. Waiting for processes to exit.
Nov 29 06:30:43 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Nov 29 06:30:43 compute-0 systemd[1]: session-46.scope: Consumed 1min 1.999s CPU time.
Nov 29 06:30:43 compute-0 systemd-logind[797]: Removed session 46.
Nov 29 06:30:43 compute-0 ceph-mon[74654]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:43 compute-0 sudo[148173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:30:43 compute-0 sudo[148173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:30:43 compute-0 sudo[148173]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:43 compute-0 sudo[148198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:30:43 compute-0 sudo[148198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:30:43 compute-0 sudo[148198]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:44.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:45.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:46 compute-0 ceph-mon[74654]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:46.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:30:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:30:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:47.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:30:48 compute-0 sshd-session[148225]: Received disconnect from 138.124.186.225 port 35982:11: Bye Bye [preauth]
Nov 29 06:30:48 compute-0 sshd-session[148225]: Disconnected from authenticating user root 138.124.186.225 port 35982 [preauth]
Nov 29 06:30:48 compute-0 ceph-mon[74654]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:30:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:48.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:30:48 compute-0 sshd-session[148227]: Accepted publickey for zuul from 192.168.122.30 port 45592 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:30:48 compute-0 systemd-logind[797]: New session 48 of user zuul.
Nov 29 06:30:48 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 29 06:30:48 compute-0 sshd-session[148227]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:30:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:30:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:49.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:30:50 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 29 06:30:50 compute-0 systemd[147597]: Activating special unit Exit the Session...
Nov 29 06:30:50 compute-0 systemd[147597]: Stopped target Main User Target.
Nov 29 06:30:50 compute-0 systemd[147597]: Stopped target Basic System.
Nov 29 06:30:50 compute-0 systemd[147597]: Stopped target Paths.
Nov 29 06:30:50 compute-0 systemd[147597]: Stopped target Sockets.
Nov 29 06:30:50 compute-0 systemd[147597]: Stopped target Timers.
Nov 29 06:30:50 compute-0 systemd[147597]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 06:30:50 compute-0 systemd[147597]: Closed D-Bus User Message Bus Socket.
Nov 29 06:30:50 compute-0 systemd[147597]: Stopped Create User's Volatile Files and Directories.
Nov 29 06:30:50 compute-0 systemd[147597]: Removed slice User Application Slice.
Nov 29 06:30:50 compute-0 systemd[147597]: Reached target Shutdown.
Nov 29 06:30:50 compute-0 systemd[147597]: Finished Exit the Session.
Nov 29 06:30:50 compute-0 systemd[147597]: Reached target Exit the Session.
Nov 29 06:30:50 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 29 06:30:50 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 29 06:30:50 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 29 06:30:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:50 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 29 06:30:50 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 29 06:30:50 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 29 06:30:50 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 29 06:30:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:30:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:50.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:30:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:30:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:51.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:30:51 compute-0 python3.9[148386]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:30:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:30:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:52.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:30:52 compute-0 sudo[148543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cviatjoxzemtckjguafegvshlfcysifh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397852.0703127-67-108417988215894/AnsiballZ_file.py'
Nov 29 06:30:52 compute-0 sudo[148543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:30:52 compute-0 python3.9[148545]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:30:52 compute-0 sudo[148543]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:53 compute-0 sshd-session[148415]: Invalid user odoo15 from 103.147.159.91 port 53454
Nov 29 06:30:53 compute-0 sudo[148696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvqjlooawfguulbhsnzvzxxbhmjzmosk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397853.0387554-67-164063621833055/AnsiballZ_file.py'
Nov 29 06:30:53 compute-0 sudo[148696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:30:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:53.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:30:53 compute-0 sshd-session[148415]: Received disconnect from 103.147.159.91 port 53454:11: Bye Bye [preauth]
Nov 29 06:30:53 compute-0 sshd-session[148415]: Disconnected from invalid user odoo15 103.147.159.91 port 53454 [preauth]
Nov 29 06:30:53 compute-0 python3.9[148698]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:30:53 compute-0 sudo[148696]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:53 compute-0 ceph-mon[74654]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:53 compute-0 ceph-mon[74654]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:53 compute-0 ceph-mon[74654]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:54 compute-0 sudo[148848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezfsgnxgiqddcumvmtiowcyqvvkdvwjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397853.7827322-67-278395396044868/AnsiballZ_file.py'
Nov 29 06:30:54 compute-0 sudo[148848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:30:54
Nov 29 06:30:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:30:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:30:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['images', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'vms', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data']
Nov 29 06:30:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:30:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:30:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:30:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:30:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:30:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:30:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:30:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:54 compute-0 python3.9[148850]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:30:54 compute-0 sudo[148848]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:30:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:54.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:30:54 compute-0 sudo[149000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brloavimxbsreajlywbeghfdnzkmtkye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397854.5637627-67-19784055423517/AnsiballZ_file.py'
Nov 29 06:30:54 compute-0 sudo[149000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:55 compute-0 python3.9[149003]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:30:55 compute-0 sudo[149000]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:55 compute-0 ceph-mon[74654]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:55.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:55 compute-0 sudo[149155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usxqalnglwoxsnapiswryzdrjajquqgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397855.2533894-67-273942394984709/AnsiballZ_file.py'
Nov 29 06:30:55 compute-0 sudo[149155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:55 compute-0 python3.9[149157]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:30:55 compute-0 sudo[149155]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:56 compute-0 sshd-session[149004]: Received disconnect from 118.193.39.127 port 56772:11: Bye Bye [preauth]
Nov 29 06:30:56 compute-0 sshd-session[149004]: Disconnected from authenticating user root 118.193.39.127 port 56772 [preauth]
Nov 29 06:30:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:56.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:56 compute-0 python3.9[149307]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:30:57 compute-0 sudo[149458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqmbbveonfmzbiqycrbrpratmuulgogj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397856.8148196-199-88135291267524/AnsiballZ_seboolean.py'
Nov 29 06:30:57 compute-0 sudo[149458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:30:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:57.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:57 compute-0 python3.9[149460]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 06:30:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:30:57 compute-0 ceph-mon[74654]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:58 compute-0 sudo[149458]: pam_unix(sudo:session): session closed for user root
Nov 29 06:30:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:30:58.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:59 compute-0 python3.9[149612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:30:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:30:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:30:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:30:59.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:30:59 compute-0 ceph-mon[74654]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:30:59 compute-0 python3.9[149733]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397858.4473093-223-256574200726365/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:31:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:00 compute-0 python3.9[149883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:31:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:00.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:00 compute-0 python3.9[150004]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397859.9621308-268-100609296722883/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:31:01 compute-0 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 06:31:01 compute-0 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 06:31:01 compute-0 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 06:31:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:01.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:01 compute-0 ceph-mon[74654]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:01 compute-0 sudo[150155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijpjpkpmtjjcclbtcwbzxtrgorjmjtgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397861.5661185-319-178726898696105/AnsiballZ_setup.py'
Nov 29 06:31:01 compute-0 sudo[150155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:02 compute-0 python3.9[150157]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:31:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Nov 29 06:31:02 compute-0 sudo[150155]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:02.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:02 compute-0 sudo[150240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpelpirxmmdxaczwyzaparijycngplor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397861.5661185-319-178726898696105/AnsiballZ_dnf.py'
Nov 29 06:31:02 compute-0 sudo[150240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:31:03 compute-0 python3.9[150242]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:31:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:03.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:03 compute-0 sudo[150244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:03 compute-0 sudo[150244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:03 compute-0 sudo[150244]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:03 compute-0 ceph-mon[74654]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Nov 29 06:31:03 compute-0 sudo[150269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:03 compute-0 sudo[150269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:03 compute-0 sudo[150269]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Nov 29 06:31:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:04.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:04 compute-0 sudo[150240]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:05.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:05 compute-0 sudo[150444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbkjdsoektsygiewlexnbrhezbrihmzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397864.9180176-355-216874970023403/AnsiballZ_systemd.py'
Nov 29 06:31:05 compute-0 sudo[150444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:05 compute-0 ceph-mon[74654]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Nov 29 06:31:05 compute-0 python3.9[150446]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 06:31:05 compute-0 sudo[150444]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Nov 29 06:31:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:31:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:06.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:31:06 compute-0 python3.9[150599]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:31:07 compute-0 sudo[150722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:07 compute-0 sudo[150722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:07 compute-0 sudo[150722]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:07 compute-0 sudo[150747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:31:07 compute-0 sudo[150747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:07 compute-0 sudo[150747]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:07 compute-0 sudo[150772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:07 compute-0 sudo[150772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:07 compute-0 sudo[150772]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:07 compute-0 python3.9[150721]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397866.1312451-379-143883420357389/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:31:07 compute-0 sudo[150797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:31:07 compute-0 sudo[150797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:07.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:07 compute-0 sudo[150797]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:07 compute-0 python3.9[151003]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:31:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:31:08 compute-0 ceph-mon[74654]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Nov 29 06:31:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 112 op/s
Nov 29 06:31:08 compute-0 python3.9[151124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397867.4019048-379-45360342350863/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:31:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:08.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:31:09 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:31:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:31:09 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:31:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:31:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:09.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:31:09 compute-0 ceph-mon[74654]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 112 op/s
Nov 29 06:31:09 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:31:09 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:31:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 06:31:09 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 06:31:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:31:09 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:31:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:31:09 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:31:09 compute-0 python3.9[151275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:31:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:31:10 compute-0 ovn_controller[147569]: 2025-11-29T06:31:10Z|00025|memory|INFO|16384 kB peak resident set size after 30.1 seconds
Nov 29 06:31:10 compute-0 ovn_controller[147569]: 2025-11-29T06:31:10Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Nov 29 06:31:10 compute-0 podman[151276]: 2025-11-29 06:31:10.176699763 +0000 UTC m=+0.142002783 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 29 06:31:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:31:10 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 90de5ce7-7af4-4422-9b1f-ec8b6115f9af does not exist
Nov 29 06:31:10 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 86f4d47d-5d09-4c4b-ac22-c0ff46ab5c73 does not exist
Nov 29 06:31:10 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 0a459b10-fd8c-4dcd-a4fb-61d5b119accf does not exist
Nov 29 06:31:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:31:10 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:31:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:31:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:31:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:31:10 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:31:10 compute-0 sudo[151370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:10 compute-0 sudo[151370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:10 compute-0 sudo[151370]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 75 KiB/s rd, 0 B/s wr, 124 op/s
Nov 29 06:31:10 compute-0 sudo[151419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:31:10 compute-0 sudo[151419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:10 compute-0 sudo[151419]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:10 compute-0 sudo[151471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:10 compute-0 sudo[151471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:10 compute-0 sudo[151471]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:10 compute-0 sudo[151496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:31:10 compute-0 sudo[151496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:10.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:10 compute-0 python3.9[151468]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397869.5081654-511-67590668071521/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:31:10 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 06:31:10 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:31:10 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:31:10 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:31:10 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:31:10 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:31:10 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:31:10 compute-0 podman[151631]: 2025-11-29 06:31:10.78668673 +0000 UTC m=+0.021728511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:31:11 compute-0 podman[151631]: 2025-11-29 06:31:11.102089141 +0000 UTC m=+0.337130932 container create 3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 06:31:11 compute-0 python3.9[151726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:31:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:31:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:11.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:31:11 compute-0 systemd[1]: Started libpod-conmon-3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304.scope.
Nov 29 06:31:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:31:11 compute-0 podman[151631]: 2025-11-29 06:31:11.612456105 +0000 UTC m=+0.847497886 container init 3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:31:11 compute-0 podman[151631]: 2025-11-29 06:31:11.620466016 +0000 UTC m=+0.855507777 container start 3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:31:11 compute-0 happy_agnesi[151776]: 167 167
Nov 29 06:31:11 compute-0 systemd[1]: libpod-3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304.scope: Deactivated successfully.
Nov 29 06:31:11 compute-0 podman[151631]: 2025-11-29 06:31:11.692416553 +0000 UTC m=+0.927458324 container attach 3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 06:31:11 compute-0 podman[151631]: 2025-11-29 06:31:11.693531874 +0000 UTC m=+0.928573625 container died 3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 06:31:11 compute-0 python3.9[151864]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397870.7100148-511-24536040884562/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:31:12 compute-0 ceph-mon[74654]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 75 KiB/s rd, 0 B/s wr, 124 op/s
Nov 29 06:31:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f9db095f7d449b93b5322809ea8153c7d2b5937d13da36a1084930d1d373739-merged.mount: Deactivated successfully.
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 86 KiB/s rd, 0 B/s wr, 142 op/s
Nov 29 06:31:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:12.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:12 compute-0 python3.9[152015]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:31:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:31:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:31:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:13.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:13 compute-0 sudo[152168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpaabaizmlkscmanccnknzgxzskrdpcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397873.0854604-625-265603475845041/AnsiballZ_file.py'
Nov 29 06:31:13 compute-0 sudo[152168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:13 compute-0 python3.9[152170]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:31:13 compute-0 sudo[152168]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:14 compute-0 sudo[152320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpjolkghnmvkemwzgysoqsufkrckewix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397873.8851151-649-218359927084086/AnsiballZ_stat.py'
Nov 29 06:31:14 compute-0 sudo[152320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Nov 29 06:31:14 compute-0 python3.9[152322]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:31:14 compute-0 sudo[152320]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:14.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:14 compute-0 sudo[152398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiaojzgfvoohhvvuzfclkhqpqifhmnes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397873.8851151-649-218359927084086/AnsiballZ_file.py'
Nov 29 06:31:14 compute-0 sudo[152398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:14 compute-0 python3.9[152400]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:31:14 compute-0 sudo[152398]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:15 compute-0 sudo[152551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frpnlrxcgnjwbrkrhmprknhjhkbhaeym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397875.1749532-649-11179826701899/AnsiballZ_stat.py'
Nov 29 06:31:15 compute-0 sudo[152551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:15 compute-0 python3.9[152553]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:31:15 compute-0 sudo[152551]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:16 compute-0 sudo[152631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mueaivluaajmfbpixhomisgmozygukcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397875.1749532-649-11179826701899/AnsiballZ_file.py'
Nov 29 06:31:16 compute-0 sudo[152631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 90 op/s
Nov 29 06:31:16 compute-0 python3.9[152633]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:31:16 compute-0 sudo[152631]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:31:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:16.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:31:16 compute-0 sudo[152784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxwgwkcukwjtsmpmjkclrnslqhekpqlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397876.5610266-718-185880977109007/AnsiballZ_file.py'
Nov 29 06:31:16 compute-0 sudo[152784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:17 compute-0 python3.9[152786]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:31:17 compute-0 sudo[152784]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:31:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:17.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:31:17 compute-0 sshd-session[152560]: Received disconnect from 115.190.37.201 port 46892:11: Bye Bye [preauth]
Nov 29 06:31:17 compute-0 sshd-session[152560]: Disconnected from authenticating user root 115.190.37.201 port 46892 [preauth]
Nov 29 06:31:17 compute-0 sudo[152936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvohryaujmpkarrukothwtmddsuswvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397877.4174612-742-78789863892111/AnsiballZ_stat.py'
Nov 29 06:31:17 compute-0 sudo[152936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:18 compute-0 python3.9[152938]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:31:18 compute-0 sudo[152936]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 90 op/s
Nov 29 06:31:18 compute-0 sudo[153014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fctjfrhnkoqbupnnwvwblbcjjreepqve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397877.4174612-742-78789863892111/AnsiballZ_file.py'
Nov 29 06:31:18 compute-0 sudo[153014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:18.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:18 compute-0 python3.9[153016]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:31:18 compute-0 sudo[153014]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:19.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:19 compute-0 sudo[153167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqxvcslnbxlsdjkidnhvhaullodaaife ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397879.0766788-778-104281645502746/AnsiballZ_stat.py'
Nov 29 06:31:19 compute-0 sudo[153167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:19 compute-0 python3.9[153169]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:31:19 compute-0 sudo[153167]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:19 compute-0 sudo[153245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bajjfpdwpzghzhuwgecqegmjjrrkmwgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397879.0766788-778-104281645502746/AnsiballZ_file.py'
Nov 29 06:31:19 compute-0 sudo[153245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:20 compute-0 python3.9[153247]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:31:20 compute-0 podman[151631]: 2025-11-29 06:31:20.231347801 +0000 UTC m=+9.466389592 container remove 3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:31:20 compute-0 sudo[153245]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:31:20 compute-0 ceph-mon[74654]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 86 KiB/s rd, 0 B/s wr, 142 op/s
Nov 29 06:31:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Nov 29 06:31:20 compute-0 systemd[1]: libpod-conmon-3623a6d3c2dc320ae3f233a4811dbf73efeaef8cbe0c14d747d683e9a5801304.scope: Deactivated successfully.
Nov 29 06:31:20 compute-0 podman[153280]: 2025-11-29 06:31:20.405832882 +0000 UTC m=+0.029050000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:31:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:20.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:20 compute-0 sudo[153419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdpnxygtorgimtsegdkbjehpmsqbhnmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397880.4232147-814-167391533465972/AnsiballZ_systemd.py'
Nov 29 06:31:20 compute-0 sudo[153419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:20 compute-0 podman[153280]: 2025-11-29 06:31:20.990522292 +0000 UTC m=+0.613739400 container create 39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:31:21 compute-0 python3.9[153421]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:31:21 compute-0 systemd[1]: Reloading.
Nov 29 06:31:21 compute-0 systemd-rc-local-generator[153444]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:31:21 compute-0 systemd-sysv-generator[153448]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:31:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:21.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:21 compute-0 systemd[1]: Started libpod-conmon-39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35.scope.
Nov 29 06:31:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6d6088165b79e8d7aa36d701b062e7c0291339d6e11548ffb4749cf82ab516/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6d6088165b79e8d7aa36d701b062e7c0291339d6e11548ffb4749cf82ab516/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6d6088165b79e8d7aa36d701b062e7c0291339d6e11548ffb4749cf82ab516/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6d6088165b79e8d7aa36d701b062e7c0291339d6e11548ffb4749cf82ab516/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6d6088165b79e8d7aa36d701b062e7c0291339d6e11548ffb4749cf82ab516/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:31:21 compute-0 sudo[153419]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:21 compute-0 ceph-mon[74654]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 88 KiB/s rd, 0 B/s wr, 146 op/s
Nov 29 06:31:21 compute-0 ceph-mon[74654]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 90 op/s
Nov 29 06:31:21 compute-0 ceph-mon[74654]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 90 op/s
Nov 29 06:31:21 compute-0 ceph-mon[74654]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Nov 29 06:31:21 compute-0 podman[153280]: 2025-11-29 06:31:21.564571707 +0000 UTC m=+1.187788855 container init 39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 06:31:21 compute-0 podman[153280]: 2025-11-29 06:31:21.576708144 +0000 UTC m=+1.199925232 container start 39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:31:21 compute-0 podman[153280]: 2025-11-29 06:31:21.721306481 +0000 UTC m=+1.344523589 container attach 39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 06:31:22 compute-0 sudo[153616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csyrtlbvqwinwcatfmxwoxnhpjixjuay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397881.8771765-838-201335488675702/AnsiballZ_stat.py'
Nov 29 06:31:22 compute-0 sudo[153616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Nov 29 06:31:22 compute-0 python3.9[153620]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:31:22 compute-0 stoic_colden[153461]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:31:22 compute-0 stoic_colden[153461]: --> relative data size: 1.0
Nov 29 06:31:22 compute-0 stoic_colden[153461]: --> All data devices are unavailable
Nov 29 06:31:22 compute-0 systemd[1]: libpod-39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35.scope: Deactivated successfully.
Nov 29 06:31:22 compute-0 podman[153280]: 2025-11-29 06:31:22.478645398 +0000 UTC m=+2.101862536 container died 39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:31:22 compute-0 sudo[153616]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:22.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:22 compute-0 sudo[153716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swenuudqqtbwenlqyypvzikhzlbyhtgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397881.8771765-838-201335488675702/AnsiballZ_file.py'
Nov 29 06:31:22 compute-0 sudo[153716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:23 compute-0 python3.9[153718]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:31:23 compute-0 sudo[153716]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:23.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:23 compute-0 ceph-mon[74654]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Nov 29 06:31:23 compute-0 sudo[153870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdswymqbwueahbjgpujwzomgebxhftaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397883.3712325-874-249113451338245/AnsiballZ_stat.py'
Nov 29 06:31:23 compute-0 sudo[153870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef6d6088165b79e8d7aa36d701b062e7c0291339d6e11548ffb4749cf82ab516-merged.mount: Deactivated successfully.
Nov 29 06:31:23 compute-0 sudo[153873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:23 compute-0 sudo[153873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:23 compute-0 sudo[153873]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:23 compute-0 sudo[153898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:23 compute-0 sudo[153898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:23 compute-0 sudo[153898]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:24 compute-0 python3.9[153872]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:31:24 compute-0 sudo[153870]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:31:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:31:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:31:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:31:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:31:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:31:24 compute-0 sudo[153998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phlolahsbljowuwmetgwnhognadhqwqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397883.3712325-874-249113451338245/AnsiballZ_file.py'
Nov 29 06:31:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 13 op/s
Nov 29 06:31:24 compute-0 sudo[153998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:24 compute-0 python3.9[154000]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:31:24 compute-0 sudo[153998]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:24.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:24 compute-0 podman[153280]: 2025-11-29 06:31:24.597792198 +0000 UTC m=+4.221009336 container remove 39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 06:31:24 compute-0 sudo[151496]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:24 compute-0 systemd[1]: libpod-conmon-39562b3b22c2eb6dd499946ad8077a06a36debee5eb254f67e45f8fbd119bc35.scope: Deactivated successfully.
Nov 29 06:31:24 compute-0 sudo[154033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:24 compute-0 sudo[154033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:24 compute-0 sudo[154033]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:24 compute-0 sudo[154081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:31:24 compute-0 sudo[154081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:24 compute-0 sudo[154081]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:24 compute-0 sudo[154128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:24 compute-0 sudo[154128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:24 compute-0 sudo[154128]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:24 compute-0 sudo[154175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:31:24 compute-0 sudo[154175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:24 compute-0 sudo[154251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfblxdukirucoepqyklaccjzmodqmfcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397884.6904404-910-175211279944177/AnsiballZ_systemd.py'
Nov 29 06:31:24 compute-0 sudo[154251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:31:25 compute-0 python3.9[154253]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:31:25 compute-0 systemd[1]: Reloading.
Nov 29 06:31:25 compute-0 podman[154296]: 2025-11-29 06:31:25.327707614 +0000 UTC m=+0.090780863 container create 680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jang, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:31:25 compute-0 podman[154296]: 2025-11-29 06:31:25.270395648 +0000 UTC m=+0.033468927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:31:25 compute-0 systemd-rc-local-generator[154340]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:31:25 compute-0 systemd-sysv-generator[154343]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:31:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:25.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:25 compute-0 systemd[1]: Started libpod-conmon-680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d.scope.
Nov 29 06:31:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:31:25 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 06:31:25 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 06:31:25 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 06:31:25 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 06:31:25 compute-0 sudo[154251]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:25 compute-0 podman[154296]: 2025-11-29 06:31:25.886008079 +0000 UTC m=+0.649081358 container init 680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jang, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 06:31:25 compute-0 podman[154296]: 2025-11-29 06:31:25.897579339 +0000 UTC m=+0.660652588 container start 680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jang, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:31:25 compute-0 upbeat_jang[154349]: 167 167
Nov 29 06:31:25 compute-0 systemd[1]: libpod-680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d.scope: Deactivated successfully.
Nov 29 06:31:26 compute-0 ceph-mon[74654]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 13 op/s
Nov 29 06:31:26 compute-0 podman[154296]: 2025-11-29 06:31:26.069491197 +0000 UTC m=+0.832564536 container attach 680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jang, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 06:31:26 compute-0 podman[154296]: 2025-11-29 06:31:26.071960887 +0000 UTC m=+0.835034256 container died 680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jang, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:31:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-60d5b7ebad227636a4a0fee740c86fbc069291443b571cc160d105670453394c-merged.mount: Deactivated successfully.
Nov 29 06:31:26 compute-0 podman[154296]: 2025-11-29 06:31:26.410690626 +0000 UTC m=+1.173763875 container remove 680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jang, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:31:26 compute-0 systemd[1]: libpod-conmon-680eb57336c3889131324d547153d8ee5cfb552a2ce77ee178573b2dd87a571d.scope: Deactivated successfully.
Nov 29 06:31:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:26.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:26 compute-0 podman[154404]: 2025-11-29 06:31:26.548630764 +0000 UTC m=+0.027587689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:31:26 compute-0 podman[154404]: 2025-11-29 06:31:26.860547527 +0000 UTC m=+0.339504422 container create 42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:31:26 compute-0 systemd[1]: Started libpod-conmon-42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a.scope.
Nov 29 06:31:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee6555dfda219c288f07192279f706af6a84079b559e7f1a203787fd9b40310/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee6555dfda219c288f07192279f706af6a84079b559e7f1a203787fd9b40310/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee6555dfda219c288f07192279f706af6a84079b559e7f1a203787fd9b40310/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aee6555dfda219c288f07192279f706af6a84079b559e7f1a203787fd9b40310/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:31:26 compute-0 podman[154404]: 2025-11-29 06:31:26.975050436 +0000 UTC m=+0.454007361 container init 42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 06:31:26 compute-0 podman[154404]: 2025-11-29 06:31:26.985738431 +0000 UTC m=+0.464695326 container start 42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:31:26 compute-0 podman[154404]: 2025-11-29 06:31:26.991154125 +0000 UTC m=+0.470111020 container attach 42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:31:27 compute-0 sudo[154550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezydzejufvksvupelrbcqijafzdjqeuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397886.7021995-940-268731645813027/AnsiballZ_file.py'
Nov 29 06:31:27 compute-0 sudo[154550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:27 compute-0 ceph-mon[74654]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:27 compute-0 python3.9[154553]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:31:27 compute-0 sudo[154550]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:27.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:27 compute-0 sudo[154707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzmmnsghkaoqcrrmnbrflobdqivfuyin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397887.394108-964-79036037087272/AnsiballZ_stat.py'
Nov 29 06:31:27 compute-0 sudo[154707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]: {
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:     "1": [
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:         {
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:             "devices": [
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:                 "/dev/loop3"
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:             ],
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:             "lv_name": "ceph_lv0",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:             "lv_size": "7511998464",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:             "name": "ceph_lv0",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:             "tags": {
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:                 "ceph.cluster_name": "ceph",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:                 "ceph.crush_device_class": "",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:                 "ceph.encrypted": "0",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:                 "ceph.osd_id": "1",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:                 "ceph.type": "block",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:                 "ceph.vdo": "0"
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:             },
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:             "type": "block",
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:             "vg_name": "ceph_vg0"
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:         }
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]:     ]
Nov 29 06:31:27 compute-0 intelligent_ganguly[154520]: }
Nov 29 06:31:27 compute-0 systemd[1]: libpod-42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a.scope: Deactivated successfully.
Nov 29 06:31:27 compute-0 podman[154404]: 2025-11-29 06:31:27.748190284 +0000 UTC m=+1.227147179 container died 42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 06:31:27 compute-0 sshd-session[154578]: Invalid user mysql from 79.116.35.29 port 43468
Nov 29 06:31:28 compute-0 sshd-session[154578]: Received disconnect from 79.116.35.29 port 43468:11: Bye Bye [preauth]
Nov 29 06:31:28 compute-0 sshd-session[154578]: Disconnected from invalid user mysql 79.116.35.29 port 43468 [preauth]
Nov 29 06:31:28 compute-0 python3.9[154711]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:31:28 compute-0 sudo[154707]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:28 compute-0 sudo[154845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zayrdtjtcdoqxnmwxtcwytyykqgmafcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397887.394108-964-79036037087272/AnsiballZ_copy.py'
Nov 29 06:31:28 compute-0 sudo[154845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:31:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:28.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:31:28 compute-0 python3.9[154847]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764397887.394108-964-79036037087272/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:31:28 compute-0 sudo[154845]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:31:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:29.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:31:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:31:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:31:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:31:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:31:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:31:29 compute-0 sudo[154999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpydigctqbvgcgwthnmlvpxzvxjlzoan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397889.201589-1015-7943009000239/AnsiballZ_file.py'
Nov 29 06:31:29 compute-0 sudo[154999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:31:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:31:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:31:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:31:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:31:29 compute-0 sshd-session[154833]: Invalid user usuario1 from 104.208.108.166 port 41666
Nov 29 06:31:29 compute-0 python3.9[155001]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:31:29 compute-0 sudo[154999]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:29 compute-0 sshd-session[154833]: Received disconnect from 104.208.108.166 port 41666:11: Bye Bye [preauth]
Nov 29 06:31:29 compute-0 sshd-session[154833]: Disconnected from invalid user usuario1 104.208.108.166 port 41666 [preauth]
Nov 29 06:31:29 compute-0 ceph-mon[74654]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-aee6555dfda219c288f07192279f706af6a84079b559e7f1a203787fd9b40310-merged.mount: Deactivated successfully.
Nov 29 06:31:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:30 compute-0 sudo[155153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsoxafdvgwvkxraapkyugtrhczccvgca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397889.9975817-1039-22560831867865/AnsiballZ_stat.py'
Nov 29 06:31:30 compute-0 sudo[155153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:31:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:30.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:31:30 compute-0 python3.9[155155]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:31:30 compute-0 sudo[155153]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.200151) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397891200201, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1245, "num_deletes": 252, "total_data_size": 2134309, "memory_usage": 2167040, "flush_reason": "Manual Compaction"}
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 29 06:31:31 compute-0 sudo[155277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njzpkixcpdhvlhruvspiusbpnqikmxak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397889.9975817-1039-22560831867865/AnsiballZ_copy.py'
Nov 29 06:31:31 compute-0 sudo[155277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:31.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:31 compute-0 python3.9[155279]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397889.9975817-1039-22560831867865/.source.json _original_basename=.onaplbc5 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:31:31 compute-0 sudo[155277]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397891843278, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 2087601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9707, "largest_seqno": 10951, "table_properties": {"data_size": 2081808, "index_size": 3124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12444, "raw_average_key_size": 19, "raw_value_size": 2069865, "raw_average_value_size": 3254, "num_data_blocks": 144, "num_entries": 636, "num_filter_entries": 636, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764397736, "oldest_key_time": 1764397736, "file_creation_time": 1764397891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 643164 microseconds, and 5130 cpu microseconds.
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.843317) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 2087601 bytes OK
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.843334) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.899199) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.899283) EVENT_LOG_v1 {"time_micros": 1764397891899268, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.899319) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 2128822, prev total WAL file size 2160595, number of live WAL files 2.
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:31:31 compute-0 ceph-mon[74654]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.900677) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(2038KB)], [23(9227KB)]
Nov 29 06:31:31 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397891900749, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 11536572, "oldest_snapshot_seqno": -1}
Nov 29 06:31:31 compute-0 podman[154404]: 2025-11-29 06:31:31.928022495 +0000 UTC m=+5.406979380 container remove 42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ganguly, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 06:31:31 compute-0 systemd[1]: libpod-conmon-42277cc83bf504185cea2cca0cef0ef10f623a83be4cef1e864e33a01d60307a.scope: Deactivated successfully.
Nov 29 06:31:31 compute-0 sudo[154175]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:32 compute-0 sudo[155357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:32 compute-0 sudo[155357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:32 compute-0 sudo[155357]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:32 compute-0 sudo[155404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:31:32 compute-0 sudo[155404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:32 compute-0 sudo[155404]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:32 compute-0 sudo[155453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:32 compute-0 sudo[155502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcyzldbpmwkjtaygebjzcpnppvaewtqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397891.8777854-1084-49476298592259/AnsiballZ_file.py'
Nov 29 06:31:32 compute-0 sudo[155453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:32 compute-0 sudo[155502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:32 compute-0 sudo[155453]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:32 compute-0 sudo[155507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:31:32 compute-0 sudo[155507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:32.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:32 compute-0 podman[155572]: 2025-11-29 06:31:32.566029647 +0000 UTC m=+0.037649896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:31:32 compute-0 python3.9[155506]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:31:32 compute-0 sudo[155502]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3999 keys, 9547129 bytes, temperature: kUnknown
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397893301873, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 9547129, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9515660, "index_size": 20351, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 98538, "raw_average_key_size": 24, "raw_value_size": 9438540, "raw_average_value_size": 2360, "num_data_blocks": 889, "num_entries": 3999, "num_filter_entries": 3999, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764397891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:31:33 compute-0 sudo[155736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzeecjqitufyrdhaqjxynpykrankgzzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397893.0684757-1108-261268030316340/AnsiballZ_stat.py'
Nov 29 06:31:33 compute-0 sudo[155736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.302304) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 9547129 bytes
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.407715) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 8.2 rd, 6.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.0 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(10.1) write-amplify(4.6) OK, records in: 4519, records dropped: 520 output_compression: NoCompression
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.407760) EVENT_LOG_v1 {"time_micros": 1764397893407741, "job": 8, "event": "compaction_finished", "compaction_time_micros": 1401367, "compaction_time_cpu_micros": 36236, "output_level": 6, "num_output_files": 1, "total_output_size": 9547129, "num_input_records": 4519, "num_output_records": 3999, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397893408274, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764397893409691, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:31.900394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.409791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.409797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.409799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.409800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:31:33 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:31:33.409802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:31:33 compute-0 podman[155572]: 2025-11-29 06:31:33.409805151 +0000 UTC m=+0.881425370 container create 9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:31:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:33.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:33 compute-0 sudo[155736]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:33 compute-0 systemd[1]: Started libpod-conmon-9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953.scope.
Nov 29 06:31:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:31:33 compute-0 sudo[155864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjbaolxdkyalhcybwpuxrlnwpoardbie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397893.0684757-1108-261268030316340/AnsiballZ_copy.py'
Nov 29 06:31:33 compute-0 sudo[155864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:34 compute-0 sudo[155864]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:34 compute-0 podman[155572]: 2025-11-29 06:31:34.093855298 +0000 UTC m=+1.565475547 container init 9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:31:34 compute-0 podman[155572]: 2025-11-29 06:31:34.103489553 +0000 UTC m=+1.575109772 container start 9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:31:34 compute-0 heuristic_khorana[155791]: 167 167
Nov 29 06:31:34 compute-0 systemd[1]: libpod-9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953.scope: Deactivated successfully.
Nov 29 06:31:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:34 compute-0 podman[155572]: 2025-11-29 06:31:34.50309617 +0000 UTC m=+1.974716409 container attach 9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khorana, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 06:31:34 compute-0 podman[155572]: 2025-11-29 06:31:34.504010116 +0000 UTC m=+1.975630345 container died 9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:31:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:34.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:35 compute-0 sudo[156030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vynlcymgxsoxhfnfdogjhnpvbspfkabk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397894.606399-1159-254280056545203/AnsiballZ_container_config_data.py'
Nov 29 06:31:35 compute-0 ceph-mon[74654]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:35 compute-0 sudo[156030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:35 compute-0 python3.9[156032]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 29 06:31:35 compute-0 sudo[156030]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:31:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:35.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:31:36 compute-0 sudo[156183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hueztemmejrctpbizwjfmiwgvulghvkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397895.6492224-1186-105533627914074/AnsiballZ_container_config_hash.py'
Nov 29 06:31:36 compute-0 sudo[156183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:36.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:31:37 compute-0 python3.9[156185]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 06:31:37 compute-0 sudo[156183]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fce32a3a945796542af4a5f507bb13a77bc014e453508e0ac4ef6e7629cabc8-merged.mount: Deactivated successfully.
Nov 29 06:31:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:37.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:37 compute-0 ceph-mon[74654]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:37 compute-0 ceph-mon[74654]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:37 compute-0 sudo[156336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynywwysdiivsjgouyvnqznhaezuuevyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397897.4861658-1213-248015088669184/AnsiballZ_podman_container_info.py'
Nov 29 06:31:37 compute-0 sudo[156336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:38 compute-0 python3.9[156338]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 06:31:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:38 compute-0 podman[155572]: 2025-11-29 06:31:38.346368943 +0000 UTC m=+5.817989162 container remove 9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khorana, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:31:38 compute-0 systemd[1]: libpod-conmon-9f9c572a122a1641e5c497327020e581c8414faafff8afdc9286c6a3e8c1f953.scope: Deactivated successfully.
Nov 29 06:31:38 compute-0 sudo[156336]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:38.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:38 compute-0 podman[156372]: 2025-11-29 06:31:38.496691895 +0000 UTC m=+0.025842399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:31:38 compute-0 podman[156372]: 2025-11-29 06:31:38.672160464 +0000 UTC m=+0.201310938 container create 237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 06:31:38 compute-0 systemd[1]: Started libpod-conmon-237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8.scope.
Nov 29 06:31:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:31:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60d8796ab8b37aeb919f432016d4a7706ddb9a39c62c7a54bb2d8598edddec5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:31:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60d8796ab8b37aeb919f432016d4a7706ddb9a39c62c7a54bb2d8598edddec5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:31:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60d8796ab8b37aeb919f432016d4a7706ddb9a39c62c7a54bb2d8598edddec5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:31:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60d8796ab8b37aeb919f432016d4a7706ddb9a39c62c7a54bb2d8598edddec5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:31:39 compute-0 podman[156372]: 2025-11-29 06:31:39.296076993 +0000 UTC m=+0.825227487 container init 237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 06:31:39 compute-0 podman[156372]: 2025-11-29 06:31:39.30435903 +0000 UTC m=+0.833509504 container start 237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 06:31:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:39.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:39 compute-0 sudo[156544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpyyiclbnrudprptdlmlamihaobggqph ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764397899.2925825-1252-204441832181174/AnsiballZ_edpm_container_manage.py'
Nov 29 06:31:39 compute-0 sudo[156544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:31:40 compute-0 epic_dijkstra[156412]: {
Nov 29 06:31:40 compute-0 epic_dijkstra[156412]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:31:40 compute-0 epic_dijkstra[156412]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:31:40 compute-0 epic_dijkstra[156412]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:31:40 compute-0 epic_dijkstra[156412]:         "osd_id": 1,
Nov 29 06:31:40 compute-0 epic_dijkstra[156412]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:31:40 compute-0 epic_dijkstra[156412]:         "type": "bluestore"
Nov 29 06:31:40 compute-0 epic_dijkstra[156412]:     }
Nov 29 06:31:40 compute-0 epic_dijkstra[156412]: }
Nov 29 06:31:40 compute-0 systemd[1]: libpod-237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8.scope: Deactivated successfully.
Nov 29 06:31:40 compute-0 ceph-mon[74654]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:40 compute-0 python3[156548]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 06:31:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:40 compute-0 podman[156372]: 2025-11-29 06:31:40.523685844 +0000 UTC m=+2.052836358 container attach 237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:31:40 compute-0 podman[156372]: 2025-11-29 06:31:40.524646381 +0000 UTC m=+2.053796885 container died 237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:31:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:31:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:40.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:31:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:41.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:41 compute-0 ceph-mon[74654]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f60d8796ab8b37aeb919f432016d4a7706ddb9a39c62c7a54bb2d8598edddec5-merged.mount: Deactivated successfully.
Nov 29 06:31:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:31:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:31:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:42.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:31:43 compute-0 podman[156372]: 2025-11-29 06:31:43.332010196 +0000 UTC m=+4.861160670 container remove 237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 06:31:43 compute-0 systemd[1]: libpod-conmon-237684982b117cfe47d703d903fe043dd5f943ba250e15ba75e4118f0ada42c8.scope: Deactivated successfully.
Nov 29 06:31:43 compute-0 sudo[155507]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:31:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:43.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:43 compute-0 podman[156588]: 2025-11-29 06:31:43.535736242 +0000 UTC m=+2.491570222 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 06:31:43 compute-0 sshd-session[156615]: Invalid user testing from 176.109.67.96 port 39158
Nov 29 06:31:43 compute-0 sshd-session[156615]: Received disconnect from 176.109.67.96 port 39158:11: Bye Bye [preauth]
Nov 29 06:31:43 compute-0 sshd-session[156615]: Disconnected from invalid user testing 176.109.67.96 port 39158 [preauth]
Nov 29 06:31:44 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:31:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:31:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:44 compute-0 ceph-mon[74654]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:44.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:45 compute-0 sshd-session[156637]: Invalid user glenn from 197.13.24.157 port 38404
Nov 29 06:31:45 compute-0 sshd-session[156637]: Received disconnect from 197.13.24.157 port 38404:11: Bye Bye [preauth]
Nov 29 06:31:45 compute-0 sshd-session[156637]: Disconnected from invalid user glenn 197.13.24.157 port 38404 [preauth]
Nov 29 06:31:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:45.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:45 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:31:45 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 7612b81e-1244-4234-b8e7-e0ce3293afcb does not exist
Nov 29 06:31:45 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 9229c554-b976-443b-8b0d-52d2bfe95898 does not exist
Nov 29 06:31:45 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 837caff3-9e84-45ee-af54-03e8e27bb7f6 does not exist
Nov 29 06:31:45 compute-0 sudo[156654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:45 compute-0 sudo[156652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:45 compute-0 sudo[156654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:45 compute-0 sudo[156652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:45 compute-0 sudo[156654]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:45 compute-0 sudo[156652]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:45 compute-0 sudo[156702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:31:45 compute-0 sudo[156702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:45 compute-0 sudo[156702]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:45 compute-0 sudo[156703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:31:45 compute-0 sudo[156703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:31:45 compute-0 sudo[156703]: pam_unix(sudo:session): session closed for user root
Nov 29 06:31:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:46.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:31:46 compute-0 ceph-mon[74654]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:31:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:31:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:47.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:31:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:48 compute-0 sshd-session[156757]: Invalid user zhangsan from 31.6.212.12 port 41754
Nov 29 06:31:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:31:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:48.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:31:48 compute-0 sshd-session[156757]: Received disconnect from 31.6.212.12 port 41754:11: Bye Bye [preauth]
Nov 29 06:31:48 compute-0 sshd-session[156757]: Disconnected from invalid user zhangsan 31.6.212.12 port 41754 [preauth]
Nov 29 06:31:49 compute-0 sshd-session[156775]: Invalid user javad from 138.124.186.225 port 54168
Nov 29 06:31:49 compute-0 sshd-session[156775]: Received disconnect from 138.124.186.225 port 54168:11: Bye Bye [preauth]
Nov 29 06:31:49 compute-0 sshd-session[156775]: Disconnected from invalid user javad 138.124.186.225 port 54168 [preauth]
Nov 29 06:31:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:49.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:49 compute-0 sshd-session[156416]: error: kex_exchange_identification: read: Connection timed out
Nov 29 06:31:49 compute-0 sshd-session[156416]: banner exchange: Connection from 58.210.98.130 port 6184: Connection timed out
Nov 29 06:31:49 compute-0 sshd-session[156778]: Received disconnect from 162.214.92.14 port 60674:11: Bye Bye [preauth]
Nov 29 06:31:49 compute-0 sshd-session[156778]: Disconnected from authenticating user root 162.214.92.14 port 60674 [preauth]
Nov 29 06:31:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:50.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:31:51 compute-0 ceph-mon[74654]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:51.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:31:52 compute-0 ceph-mon[74654]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:52 compute-0 ceph-mon[74654]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:52.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:53 compute-0 ceph-mon[74654]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:53.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:31:54
Nov 29 06:31:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:31:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:31:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'images', '.mgr', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'volumes']
Nov 29 06:31:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:31:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:31:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:31:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:31:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:31:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:31:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:31:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:31:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:54.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:31:55 compute-0 ceph-mgr[74948]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1221624088
Nov 29 06:31:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:55.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:31:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:57.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:57 compute-0 ceph-mon[74654]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:31:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:31:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:31:58.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:31:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:31:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:31:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:31:59.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:32:00 compute-0 ceph-mon[74654]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:00.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:32:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:01.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:32:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:32:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:02.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:03 compute-0 ceph-mon[74654]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:03 compute-0 ceph-mon[74654]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:03 compute-0 sshd-session[156835]: Invalid user testing from 49.247.35.31 port 62582
Nov 29 06:32:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:03.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:03 compute-0 sshd-session[156835]: Received disconnect from 49.247.35.31 port 62582:11: Bye Bye [preauth]
Nov 29 06:32:03 compute-0 sshd-session[156835]: Disconnected from invalid user testing 49.247.35.31 port 62582 [preauth]
Nov 29 06:32:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:04.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:32:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:05.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:32:05 compute-0 ceph-mon[74654]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:05 compute-0 sudo[156839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:32:05 compute-0 sudo[156839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:32:05 compute-0 sudo[156839]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:05 compute-0 sudo[156866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:32:05 compute-0 sudo[156866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:32:05 compute-0 sudo[156866]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:32:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:06.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:32:06 compute-0 podman[156602]: 2025-11-29 06:32:06.767637959 +0000 UTC m=+25.200862531 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 06:32:06 compute-0 podman[156913]: 2025-11-29 06:32:06.886737429 +0000 UTC m=+0.023188293 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 06:32:07 compute-0 sshd-session[156859]: Invalid user user5 from 118.193.39.127 port 45256
Nov 29 06:32:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:32:07 compute-0 sshd-session[156859]: Received disconnect from 118.193.39.127 port 45256:11: Bye Bye [preauth]
Nov 29 06:32:07 compute-0 sshd-session[156859]: Disconnected from invalid user user5 118.193.39.127 port 45256 [preauth]
Nov 29 06:32:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:07.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:07 compute-0 ceph-mon[74654]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:08 compute-0 podman[156913]: 2025-11-29 06:32:08.00396598 +0000 UTC m=+1.140416794 container create 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 06:32:08 compute-0 python3[156548]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 06:32:08 compute-0 sudo[156544]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:08.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:08 compute-0 sudo[157104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyxxnepjvrboejcgnpiyvmgtuvwdtdbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397928.3692038-1276-277144817771601/AnsiballZ_stat.py'
Nov 29 06:32:08 compute-0 sudo[157104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:08 compute-0 ceph-mon[74654]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:08 compute-0 python3.9[157106]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:32:08 compute-0 sudo[157104]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:09.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:09 compute-0 sudo[157259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuofdmwrryjbbiwopbfwrukrvuiorhpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397929.1723874-1303-96050390542444/AnsiballZ_file.py'
Nov 29 06:32:09 compute-0 sudo[157259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:09 compute-0 python3.9[157261]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:09 compute-0 sudo[157259]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:09 compute-0 sshd-session[157052]: Invalid user cumulus from 34.92.81.41 port 56240
Nov 29 06:32:09 compute-0 sudo[157335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owlzmhogzynfmixpwwtfzbvukudzocuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397929.1723874-1303-96050390542444/AnsiballZ_stat.py'
Nov 29 06:32:09 compute-0 sudo[157335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:10 compute-0 sshd-session[157052]: Received disconnect from 34.92.81.41 port 56240:11: Bye Bye [preauth]
Nov 29 06:32:10 compute-0 sshd-session[157052]: Disconnected from invalid user cumulus 34.92.81.41 port 56240 [preauth]
Nov 29 06:32:10 compute-0 python3.9[157337]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:32:10 compute-0 sudo[157335]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:10.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:10 compute-0 ceph-mon[74654]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:10 compute-0 sudo[157486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bupfoyaniciotbqkpcuvowyleaekmxqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397930.286957-1303-58847719296659/AnsiballZ_copy.py'
Nov 29 06:32:10 compute-0 sudo[157486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:10 compute-0 python3.9[157488]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397930.286957-1303-58847719296659/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:10 compute-0 sudo[157486]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:11 compute-0 sudo[157563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szkyjxzrmackfayefczgrmqorplldsaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397930.286957-1303-58847719296659/AnsiballZ_systemd.py'
Nov 29 06:32:11 compute-0 sudo[157563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:11.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:11 compute-0 python3.9[157565]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 06:32:11 compute-0 systemd[1]: Reloading.
Nov 29 06:32:11 compute-0 systemd-rc-local-generator[157594]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:32:11 compute-0 systemd-sysv-generator[157597]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:32:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:12 compute-0 sudo[157563]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:32:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:12.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:32:12 compute-0 sudo[157675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwcpndfckynwgyqxstokyszluemrenwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397930.286957-1303-58847719296659/AnsiballZ_systemd.py'
Nov 29 06:32:12 compute-0 sudo[157675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:32:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:32:12 compute-0 ceph-mon[74654]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:13 compute-0 python3.9[157677]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:32:13 compute-0 systemd[1]: Reloading.
Nov 29 06:32:13 compute-0 systemd-rc-local-generator[157702]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:32:13 compute-0 systemd-sysv-generator[157708]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:32:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:13.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:13 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 29 06:32:14 compute-0 podman[157716]: 2025-11-29 06:32:14.040254521 +0000 UTC m=+0.331037141 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 06:32:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da201fd80aede9d0b94bcf8a7b6f117abc11be9268ffa9452262c34d0c0a2f68/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 29 06:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da201fd80aede9d0b94bcf8a7b6f117abc11be9268ffa9452262c34d0c0a2f68/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 06:32:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:14 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000.
Nov 29 06:32:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:32:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:14.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:32:14 compute-0 ceph-mon[74654]: pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:14 compute-0 podman[157720]: 2025-11-29 06:32:14.971443261 +0000 UTC m=+1.243345352 container init 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 06:32:14 compute-0 ovn_metadata_agent[157760]: + sudo -E kolla_set_configs
Nov 29 06:32:15 compute-0 podman[157720]: 2025-11-29 06:32:15.022735845 +0000 UTC m=+1.294637886 container start 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Validating config file
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Copying service configuration files
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Writing out command to execute
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: ++ cat /run_command
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: + CMD=neutron-ovn-metadata-agent
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: + ARGS=
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: + sudo kolla_copy_cacerts
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: + [[ ! -n '' ]]
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: + . kolla_extend_start
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: Running command: 'neutron-ovn-metadata-agent'
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: + umask 0022
Nov 29 06:32:15 compute-0 ovn_metadata_agent[157760]: + exec neutron-ovn-metadata-agent
Nov 29 06:32:15 compute-0 edpm-start-podman-container[157720]: ovn_metadata_agent
Nov 29 06:32:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:15.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:15 compute-0 edpm-start-podman-container[157719]: Creating additional drop-in dependency for "ovn_metadata_agent" (81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000)
Nov 29 06:32:15 compute-0 systemd[1]: Reloading.
Nov 29 06:32:15 compute-0 podman[157769]: 2025-11-29 06:32:15.546079674 +0000 UTC m=+0.505772399 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 06:32:15 compute-0 systemd-rc-local-generator[157840]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:32:15 compute-0 systemd-sysv-generator[157844]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:32:15 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 29 06:32:15 compute-0 ceph-mon[74654]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:15 compute-0 sudo[157675]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:16 compute-0 sshd-session[148230]: Connection closed by 192.168.122.30 port 45592
Nov 29 06:32:16 compute-0 sshd-session[148227]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:32:16 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 29 06:32:16 compute-0 systemd[1]: session-48.scope: Consumed 58.755s CPU time.
Nov 29 06:32:16 compute-0 systemd-logind[797]: Session 48 logged out. Waiting for processes to exit.
Nov 29 06:32:16 compute-0 systemd-logind[797]: Removed session 48.
Nov 29 06:32:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:16.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:16 compute-0 ceph-mon[74654]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.161 157767 INFO neutron.common.config [-] Logging enabled!
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.162 157767 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.162 157767 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.162 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.163 157767 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.163 157767 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.163 157767 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.163 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.163 157767 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.163 157767 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.163 157767 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.164 157767 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.165 157767 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.166 157767 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.167 157767 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.168 157767 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.169 157767 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.170 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.171 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.171 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.171 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.171 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.171 157767 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.171 157767 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.171 157767 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.172 157767 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.173 157767 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.174 157767 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.174 157767 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.174 157767 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.174 157767 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.174 157767 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.174 157767 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.174 157767 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.175 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.176 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.176 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.176 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.176 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.176 157767 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.176 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.176 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.177 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.178 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.179 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.180 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.180 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.180 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.180 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.180 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.180 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.181 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.181 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.181 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.181 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.181 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.182 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.182 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.182 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.182 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.182 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.182 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.182 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.183 157767 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.184 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.185 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.186 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.187 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.188 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.189 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.190 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.191 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.192 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.192 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.192 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.192 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.192 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.192 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.192 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.193 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.193 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.193 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.193 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.193 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.193 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.193 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.194 157767 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.195 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.196 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.197 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.198 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.199 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.199 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.199 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.199 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.199 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.199 157767 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.199 157767 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.208 157767 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.209 157767 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.209 157767 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.209 157767 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.209 157767 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.224 157767 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 93db784b-4e42-404a-b548-49ad165fd917 (UUID: 93db784b-4e42-404a-b548-49ad165fd917) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.246 157767 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.247 157767 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.247 157767 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.247 157767 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.250 157767 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.256 157767 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.263 157767 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '93db784b-4e42-404a-b548-49ad165fd917'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fe9f772b8b0>], external_ids={}, name=93db784b-4e42-404a-b548-49ad165fd917, nb_cfg_timestamp=1764397848072, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.264 157767 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fe9f7719f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.264 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.264 157767 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.265 157767 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.265 157767 INFO oslo_service.service [-] Starting 1 workers
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.269 157767 DEBUG oslo_service.service [-] Started child 157875 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.273 157767 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpzxveuc71/privsep.sock']
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.273 157875 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-954079'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.315 157875 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.316 157875 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.316 157875 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.321 157875 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.331 157875 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 06:32:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:32:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.341 157875 INFO eventlet.wsgi.server [-] (157875) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 29 06:32:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:17.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:17 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 29 06:32:18 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:18.003 157767 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 06:32:18 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:18.004 157767 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpzxveuc71/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 06:32:18 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.844 157880 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 06:32:18 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.851 157880 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 06:32:18 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.853 157880 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 29 06:32:18 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:17.854 157880 INFO oslo.privsep.daemon [-] privsep daemon running as pid 157880
Nov 29 06:32:18 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:18.007 157880 DEBUG oslo.privsep.daemon [-] privsep: reply[2da9e522-2821-48ce-a624-c8ed0481daac]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 06:32:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:18 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:18.530 157880 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:32:18 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:18.530 157880 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:32:18 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:18.531 157880 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:32:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:18.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.112 157880 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fe6125-13d3-4b57-b1c7-701ce4d0cd7a]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.114 157767 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=93db784b-4e42-404a-b548-49ad165fd917, column=external_ids, values=({'neutron:ovn-metadata-id': '8bce076b-c275-5b6a-8cac-f4510edf00a8'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.127 157767 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=93db784b-4e42-404a-b548-49ad165fd917, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.133 157767 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.133 157767 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.134 157767 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.134 157767 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.134 157767 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.134 157767 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.134 157767 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.135 157767 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.135 157767 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.135 157767 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.135 157767 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.135 157767 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.136 157767 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.136 157767 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.136 157767 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.136 157767 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.136 157767 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.137 157767 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.137 157767 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.137 157767 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.137 157767 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.137 157767 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.137 157767 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.138 157767 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.138 157767 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.138 157767 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.138 157767 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.138 157767 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.139 157767 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.139 157767 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.139 157767 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.139 157767 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.139 157767 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.139 157767 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.139 157767 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.140 157767 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.140 157767 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.140 157767 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.140 157767 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.140 157767 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.141 157767 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.141 157767 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.141 157767 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.141 157767 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.142 157767 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.142 157767 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.142 157767 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.142 157767 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.142 157767 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.142 157767 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.143 157767 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.143 157767 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.143 157767 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.143 157767 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.143 157767 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.144 157767 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.144 157767 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.144 157767 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.144 157767 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.144 157767 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.145 157767 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.145 157767 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.145 157767 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.145 157767 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.146 157767 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.146 157767 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.146 157767 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.146 157767 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.146 157767 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.147 157767 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.147 157767 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.147 157767 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.147 157767 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.147 157767 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.148 157767 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.148 157767 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.148 157767 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.148 157767 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.149 157767 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.149 157767 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.149 157767 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.149 157767 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.149 157767 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.150 157767 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.150 157767 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.150 157767 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.150 157767 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.150 157767 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.151 157767 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.151 157767 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.151 157767 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.151 157767 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.151 157767 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.151 157767 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.152 157767 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.152 157767 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.152 157767 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.152 157767 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.152 157767 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.153 157767 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.153 157767 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.153 157767 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.153 157767 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.153 157767 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.154 157767 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.154 157767 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.154 157767 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.154 157767 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.154 157767 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.155 157767 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.155 157767 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.155 157767 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.155 157767 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.156 157767 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.156 157767 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.156 157767 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.156 157767 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.156 157767 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.157 157767 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.157 157767 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.157 157767 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.157 157767 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.157 157767 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.158 157767 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.158 157767 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.158 157767 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.158 157767 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.159 157767 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.159 157767 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.159 157767 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.159 157767 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.159 157767 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.160 157767 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.160 157767 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.160 157767 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.160 157767 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.161 157767 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.161 157767 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.161 157767 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.161 157767 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.161 157767 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.162 157767 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.162 157767 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.162 157767 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.162 157767 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.162 157767 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.162 157767 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.163 157767 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.163 157767 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.163 157767 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.163 157767 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.163 157767 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.164 157767 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.164 157767 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.164 157767 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.164 157767 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.164 157767 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.165 157767 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.165 157767 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.165 157767 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.165 157767 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.165 157767 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.165 157767 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.166 157767 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.166 157767 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.166 157767 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.166 157767 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.166 157767 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.167 157767 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.167 157767 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.167 157767 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.167 157767 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.167 157767 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.168 157767 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.168 157767 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.168 157767 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.168 157767 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.168 157767 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.169 157767 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.169 157767 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.169 157767 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.169 157767 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.169 157767 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.170 157767 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.170 157767 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.170 157767 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.171 157767 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.171 157767 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.171 157767 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.171 157767 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.171 157767 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.172 157767 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.172 157767 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.172 157767 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.172 157767 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.172 157767 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.173 157767 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.173 157767 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.173 157767 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.173 157767 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.173 157767 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.173 157767 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.174 157767 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.174 157767 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.174 157767 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.174 157767 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.174 157767 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.175 157767 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.175 157767 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.175 157767 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.175 157767 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.175 157767 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.176 157767 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.176 157767 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.176 157767 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.176 157767 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.176 157767 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.177 157767 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.177 157767 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.177 157767 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.177 157767 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.177 157767 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.178 157767 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.178 157767 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.178 157767 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.178 157767 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.178 157767 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.178 157767 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.179 157767 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.179 157767 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.179 157767 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.179 157767 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.179 157767 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.180 157767 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.180 157767 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.180 157767 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.180 157767 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.180 157767 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.181 157767 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.181 157767 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.181 157767 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.181 157767 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.181 157767 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.182 157767 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.182 157767 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.182 157767 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.182 157767 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.183 157767 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.183 157767 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.183 157767 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.183 157767 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.183 157767 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.183 157767 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.184 157767 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.184 157767 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.184 157767 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.184 157767 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.184 157767 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.185 157767 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.185 157767 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.185 157767 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.185 157767 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.185 157767 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.186 157767 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.186 157767 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.186 157767 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.186 157767 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.186 157767 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.187 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.187 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.187 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.187 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.187 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.188 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.188 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.188 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.188 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.188 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.189 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.189 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.189 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.189 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.189 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.190 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.190 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.190 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.190 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.190 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.191 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.191 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.191 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.191 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.191 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.192 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.192 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.192 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.192 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.192 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.193 157767 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.193 157767 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.193 157767 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.193 157767 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.193 157767 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:32:19 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:32:19.194 157767 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 06:32:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:19.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:20 compute-0 sshd-session[157885]: Invalid user ubuntu from 103.147.159.91 port 53572
Nov 29 06:32:20 compute-0 sshd-session[157885]: Received disconnect from 103.147.159.91 port 53572:11: Bye Bye [preauth]
Nov 29 06:32:20 compute-0 sshd-session[157885]: Disconnected from invalid user ubuntu 103.147.159.91 port 53572 [preauth]
Nov 29 06:32:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:20.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:21.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:22 compute-0 sshd-session[157889]: Accepted publickey for zuul from 192.168.122.30 port 41188 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:32:22 compute-0 auditd[707]: Audit daemon rotating log files
Nov 29 06:32:22 compute-0 systemd-logind[797]: New session 49 of user zuul.
Nov 29 06:32:22 compute-0 systemd[1]: Started Session 49 of User zuul.
Nov 29 06:32:22 compute-0 sshd-session[157889]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:32:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:32:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:22.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:23 compute-0 python3.9[158043]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:32:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:23.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:32:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:32:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:32:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:32:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:32:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:32:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:24.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:24 compute-0 sudo[158197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrouboggrkvbncbxugraodfzkfyxitxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397944.0487082-67-14316435904224/AnsiballZ_command.py'
Nov 29 06:32:24 compute-0 sudo[158197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:32:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:25.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:32:26 compute-0 sudo[158201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:32:26 compute-0 sudo[158201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:32:26 compute-0 sudo[158201]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:26 compute-0 sudo[158226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:32:26 compute-0 sudo[158226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:32:26 compute-0 sudo[158226]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:26 compute-0 python3.9[158199]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:32:26 compute-0 sudo[158197]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:26.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:27.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:27 compute-0 sudo[158414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqygiryqjvbdnmxlabreadxpsmdtmutm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397946.9797378-100-21458647854658/AnsiballZ_systemd_service.py'
Nov 29 06:32:27 compute-0 sudo[158414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:32:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:28.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:32:28 compute-0 ceph-mds[94810]: mds.beacon.cephfs.compute-0.jzycnf missed beacon ack from the monitors
Nov 29 06:32:29 compute-0 python3.9[158416]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 06:32:29 compute-0 systemd[1]: Reloading.
Nov 29 06:32:29 compute-0 systemd-rc-local-generator[158443]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:32:29 compute-0 systemd-sysv-generator[158446]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:32:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:29.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:32:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:32:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:32:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:32:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:32:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:32:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:32:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:32:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:32:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:32:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:30.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:30 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:32:31 compute-0 sudo[158414]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:31 compute-0 sshd-session[158452]: Received disconnect from 79.116.35.29 port 42778:11: Bye Bye [preauth]
Nov 29 06:32:31 compute-0 sshd-session[158452]: Disconnected from authenticating user root 79.116.35.29 port 42778 [preauth]
Nov 29 06:32:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:32:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:31.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:32:32 compute-0 python3.9[158611]: ansible-ansible.builtin.service_facts Invoked
Nov 29 06:32:32 compute-0 network[158629]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 06:32:32 compute-0 network[158630]: 'network-scripts' will be removed from distribution in near future.
Nov 29 06:32:32 compute-0 network[158631]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 06:32:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:32.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:32 compute-0 ceph-mds[94810]: mds.beacon.cephfs.compute-0.jzycnf missed beacon ack from the monitors
Nov 29 06:32:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:33.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:34 compute-0 ceph-mon[74654]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:34.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:35 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 06:32:35 compute-0 ceph-mon[74654]: paxos.0).electionLogic(23) init, last seen epoch 23, mid-election, bumping
Nov 29 06:32:35 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:32:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:35.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:35 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 06:32:35 compute-0 ceph-mon[74654]: paxos.0).electionLogic(27) init, last seen epoch 27, mid-election, bumping
Nov 29 06:32:35 compute-0 ceph-mon[74654]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:32:36 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 06:32:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:36.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:36 compute-0 sudo[158893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpwbpinxzbbuuebanuwymrolfsuvhphi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397956.4605677-157-270566448279001/AnsiballZ_systemd_service.py'
Nov 29 06:32:36 compute-0 sudo[158893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:36 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 06:32:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 06:32:36 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 06:32:36 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 29 06:32:36 compute-0 ceph-mon[74654]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.vxabpq(active, since 15m), standbys: compute-2.ngsyhe, compute-1.gaxpay
Nov 29 06:32:36 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 06:32:37 compute-0 python3.9[158895]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:32:37 compute-0 sudo[158893]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:37.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:37 compute-0 sudo[159047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjixnmpoquaumneevnwjpjfnbobaihhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397957.2723818-157-191259453290237/AnsiballZ_systemd_service.py'
Nov 29 06:32:37 compute-0 sudo[159047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:38 compute-0 python3.9[159049]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:32:38 compute-0 sudo[159047]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:38 compute-0 ceph-mon[74654]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:38 compute-0 ceph-mon[74654]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:38 compute-0 ceph-mon[74654]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:38 compute-0 ceph-mon[74654]: mon.compute-1 calling monitor election
Nov 29 06:32:38 compute-0 ceph-mon[74654]: mon.compute-2 calling monitor election
Nov 29 06:32:38 compute-0 ceph-mon[74654]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:38 compute-0 ceph-mon[74654]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:38 compute-0 ceph-mon[74654]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:38 compute-0 ceph-mon[74654]: mon.compute-2 is new leader, mons compute-2,compute-1 in quorum (ranks 1,2)
Nov 29 06:32:38 compute-0 ceph-mon[74654]: mon.compute-0 calling monitor election
Nov 29 06:32:38 compute-0 ceph-mon[74654]: overall HEALTH_OK
Nov 29 06:32:38 compute-0 ceph-mon[74654]: mon.compute-2 calling monitor election
Nov 29 06:32:38 compute-0 ceph-mon[74654]: mon.compute-1 calling monitor election
Nov 29 06:32:38 compute-0 ceph-mon[74654]: mon.compute-0 calling monitor election
Nov 29 06:32:38 compute-0 ceph-mon[74654]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 06:32:38 compute-0 ceph-mon[74654]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:38 compute-0 ceph-mon[74654]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 06:32:38 compute-0 ceph-mon[74654]: fsmap cephfs:1 {0=cephfs.compute-2.gxdwyy=up:active} 2 up:standby
Nov 29 06:32:38 compute-0 ceph-mon[74654]: osdmap e139: 3 total, 3 up, 3 in
Nov 29 06:32:38 compute-0 ceph-mon[74654]: mgrmap e10: compute-0.vxabpq(active, since 15m), standbys: compute-2.ngsyhe, compute-1.gaxpay
Nov 29 06:32:38 compute-0 ceph-mon[74654]: overall HEALTH_OK
Nov 29 06:32:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:38.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:38 compute-0 sudo[159200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqovzyiurljjptodvtbpewxgqpvhmgdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397958.3149176-157-164961547458549/AnsiballZ_systemd_service.py'
Nov 29 06:32:38 compute-0 sudo[159200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:39 compute-0 python3.9[159202]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:32:39 compute-0 sudo[159200]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:39.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:39 compute-0 sudo[159354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghvvyynxmsrhewzjhycomtdqujubkujg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397959.337331-157-82306310351209/AnsiballZ_systemd_service.py'
Nov 29 06:32:39 compute-0 sudo[159354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:40 compute-0 python3.9[159356]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:32:40 compute-0 sudo[159354]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:32:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:40.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:32:40 compute-0 sudo[159509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udvqsskviaimneargceyaclgzshgjcmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397960.4234939-157-33438907689198/AnsiballZ_systemd_service.py'
Nov 29 06:32:40 compute-0 sudo[159509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:32:41 compute-0 python3.9[159511]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:32:41 compute-0 sudo[159509]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:41 compute-0 ceph-mon[74654]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:41.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:41 compute-0 sudo[159663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayfwzuoagnagboczagohggqbeciuoose ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397961.3158295-157-159380866303485/AnsiballZ_systemd_service.py'
Nov 29 06:32:41 compute-0 sudo[159663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:41 compute-0 sshd-session[159388]: Received disconnect from 104.208.108.166 port 9448:11: Bye Bye [preauth]
Nov 29 06:32:41 compute-0 sshd-session[159388]: Disconnected from authenticating user root 104.208.108.166 port 9448 [preauth]
Nov 29 06:32:41 compute-0 python3.9[159665]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:32:42 compute-0 sudo[159663]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:42 compute-0 sudo[159816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pywvhqgycojghubydjcmwxnkmnczwfbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397962.1875968-157-32804204408747/AnsiballZ_systemd_service.py'
Nov 29 06:32:42 compute-0 sudo[159816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:42.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:43 compute-0 python3.9[159818]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:32:43 compute-0 sudo[159816]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:43.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:43 compute-0 ceph-mon[74654]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:44 compute-0 sudo[159979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxwhxfxmallhojzcfysoktvwtrlaqhvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397963.8484457-313-113884577747386/AnsiballZ_file.py'
Nov 29 06:32:44 compute-0 sudo[159979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:44 compute-0 podman[159944]: 2025-11-29 06:32:44.591056378 +0000 UTC m=+0.162005068 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 06:32:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:44.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:45 compute-0 python3.9[159988]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:45 compute-0 sudo[159979]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:32:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:45.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:32:45 compute-0 ceph-mon[74654]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:45 compute-0 sudo[160148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzbunpoapmagpalppeepvjdilckjziwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397965.497498-313-31319879045898/AnsiballZ_file.py'
Nov 29 06:32:45 compute-0 sudo[160148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:45 compute-0 podman[160150]: 2025-11-29 06:32:45.991673321 +0000 UTC m=+0.065414769 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 06:32:46 compute-0 sudo[160170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:32:46 compute-0 sudo[160170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:32:46 compute-0 sudo[160170]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:32:46 compute-0 python3.9[160151]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:46 compute-0 sudo[160148]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:46 compute-0 sudo[160195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:32:46 compute-0 sudo[160195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:32:46 compute-0 sudo[160195]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:46 compute-0 sudo[160220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:32:46 compute-0 sudo[160220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:32:46 compute-0 sudo[160220]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:46 compute-0 sudo[160241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:32:46 compute-0 sudo[160241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:32:46 compute-0 sudo[160241]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:46 compute-0 sudo[160293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:32:46 compute-0 sudo[160293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:32:46 compute-0 sudo[160304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:32:46 compute-0 sudo[160304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:32:46 compute-0 sudo[160304]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:46 compute-0 sudo[160484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aovdnifpxrmfpbcsvfyggwniuousaljk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397966.284565-313-197625946230518/AnsiballZ_file.py'
Nov 29 06:32:46 compute-0 sudo[160484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:46.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:46 compute-0 sudo[160293]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:46 compute-0 python3.9[160486]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:46 compute-0 sudo[160484]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:32:46 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:32:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:32:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:32:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:32:47 compute-0 sudo[160651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gudsrgoajhyiutxflvflfjzjzsgxmuup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397966.972839-313-185581642545008/AnsiballZ_file.py'
Nov 29 06:32:47 compute-0 sudo[160651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:47.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:48 compute-0 python3.9[160653]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:48 compute-0 sudo[160651]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:48.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:48 compute-0 sudo[160805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajdtevzgcohiyxxufwhwkbbzhlqjkekd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397968.6100295-313-201127124824724/AnsiballZ_file.py'
Nov 29 06:32:48 compute-0 sudo[160805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:49 compute-0 python3.9[160807]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:49 compute-0 sudo[160805]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:32:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:49.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:32:49 compute-0 sudo[160957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqheicnpibqjjqzwjgdbcdwnkqfscvea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397969.409346-313-91238464264640/AnsiballZ_file.py'
Nov 29 06:32:49 compute-0 sudo[160957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:50.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:50 compute-0 python3.9[160959]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:50 compute-0 sudo[160957]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:51.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:51 compute-0 sudo[161112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqcauibiiimcxelgnejxbzrlaoldfwxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397970.999082-313-208988721006323/AnsiballZ_file.py'
Nov 29 06:32:51 compute-0 sudo[161112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:51 compute-0 python3.9[161114]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:51 compute-0 sudo[161112]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:52 compute-0 sshd-session[161060]: Invalid user dmdba from 176.109.67.96 port 50538
Nov 29 06:32:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:32:52 compute-0 sshd-session[161060]: Received disconnect from 176.109.67.96 port 50538:11: Bye Bye [preauth]
Nov 29 06:32:52 compute-0 sshd-session[161060]: Disconnected from invalid user dmdba 176.109.67.96 port 50538 [preauth]
Nov 29 06:32:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:52 compute-0 sudo[161264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqslqztrsenvtovzxlbcruqmlrfvmuav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397972.0132127-463-168465764957921/AnsiballZ_file.py'
Nov 29 06:32:52 compute-0 sudo[161264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:52 compute-0 python3.9[161266]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:52 compute-0 sudo[161264]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:32:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:52.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:32:53 compute-0 sudo[161419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zycecfxesdrncdmjryicmbhfnjgdmytr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397972.7523608-463-136569011942978/AnsiballZ_file.py'
Nov 29 06:32:53 compute-0 sudo[161419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:53.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:53 compute-0 python3.9[161421]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:53 compute-0 sudo[161419]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:32:54
Nov 29 06:32:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:32:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:32:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'images', 'default.rgw.log', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root']
Nov 29 06:32:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:32:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:32:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:32:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:32:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:32:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:32:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:32:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:54 compute-0 sudo[161572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqutvxofpmqzfcmyiwklqggorlalygjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397974.0621793-463-6068745215230/AnsiballZ_file.py'
Nov 29 06:32:54 compute-0 sudo[161572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:32:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:54.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:32:54 compute-0 python3.9[161574]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:54 compute-0 sudo[161572]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:55 compute-0 sudo[161725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkzgducxntuqobcabyppuyiqckomlpnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397974.8733869-463-77196960933932/AnsiballZ_file.py'
Nov 29 06:32:55 compute-0 sudo[161725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:55 compute-0 python3.9[161727]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:55 compute-0 sudo[161725]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:32:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:55.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:32:55 compute-0 ceph-mon[74654]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:55 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:32:56 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev eaca5981-b568-4779-8f07-fa20e06487ca does not exist
Nov 29 06:32:56 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 7e53b879-a216-47f1-a5eb-1730266c0125 does not exist
Nov 29 06:32:56 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 7263215f-e632-468a-a8bd-8f23d353ca3b does not exist
Nov 29 06:32:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:32:56 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:32:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:32:56 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:32:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:32:56 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:32:56 compute-0 sudo[161879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipsvblpgpganjrqvshybduziqbhmpbfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397975.7075708-463-176335052867522/AnsiballZ_file.py'
Nov 29 06:32:56 compute-0 sudo[161879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:56 compute-0 sudo[161881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:32:56 compute-0 sudo[161881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:32:56 compute-0 sudo[161881]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:56 compute-0 sudo[161907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:32:56 compute-0 sudo[161907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:32:56 compute-0 sudo[161907]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:56 compute-0 sudo[161932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:32:56 compute-0 sudo[161932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:32:56 compute-0 sudo[161932]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:56 compute-0 python3.9[161882]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:56 compute-0 sudo[161879]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:56 compute-0 sudo[161957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:32:56 compute-0 sudo[161957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:32:56 compute-0 sshd-session[161832]: Received disconnect from 162.214.92.14 port 59846:11: Bye Bye [preauth]
Nov 29 06:32:56 compute-0 sshd-session[161832]: Disconnected from authenticating user root 162.214.92.14 port 59846 [preauth]
Nov 29 06:32:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:32:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:56.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:32:56 compute-0 podman[162121]: 2025-11-29 06:32:56.603847019 +0000 UTC m=+0.029645978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:32:56 compute-0 sudo[162185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhxfylvlgubpdjvnqslbbppyydnjjooe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397976.3837616-463-187118452388239/AnsiballZ_file.py'
Nov 29 06:32:56 compute-0 sudo[162185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:57 compute-0 python3.9[162187]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:57 compute-0 sudo[162185]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:32:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:57.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:32:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:32:57 compute-0 sudo[162338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyglisnqfpgfenbfrdspwkixnxbnucjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397977.2792187-463-233535891127418/AnsiballZ_file.py'
Nov 29 06:32:57 compute-0 sudo[162338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:57 compute-0 podman[162121]: 2025-11-29 06:32:57.682663769 +0000 UTC m=+1.108462718 container create 26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 06:32:57 compute-0 ceph-mon[74654]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:32:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:32:57 compute-0 ceph-mon[74654]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:57 compute-0 ceph-mon[74654]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:57 compute-0 ceph-mon[74654]: pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:57 compute-0 ceph-mon[74654]: pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:32:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:32:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:32:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:32:57 compute-0 ceph-mon[74654]: pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:57 compute-0 systemd[1]: Started libpod-conmon-26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19.scope.
Nov 29 06:32:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:32:57 compute-0 python3.9[162340]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:32:57 compute-0 sudo[162338]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:58 compute-0 podman[162121]: 2025-11-29 06:32:58.214556504 +0000 UTC m=+1.640355413 container init 26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 06:32:58 compute-0 podman[162121]: 2025-11-29 06:32:58.227265638 +0000 UTC m=+1.653064547 container start 26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:32:58 compute-0 systemd[1]: libpod-26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19.scope: Deactivated successfully.
Nov 29 06:32:58 compute-0 trusting_satoshi[162343]: 167 167
Nov 29 06:32:58 compute-0 conmon[162343]: conmon 26acf1bfa2f77e03b707 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19.scope/container/memory.events
Nov 29 06:32:58 compute-0 podman[162121]: 2025-11-29 06:32:58.287414936 +0000 UTC m=+1.713213895 container attach 26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 06:32:58 compute-0 podman[162121]: 2025-11-29 06:32:58.288567519 +0000 UTC m=+1.714366428 container died 26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:32:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bcc9a1ad1f4b7f1d84ed977f41bc21d4ce75a967421c43bcb35efd53329093f-merged.mount: Deactivated successfully.
Nov 29 06:32:58 compute-0 sudo[162510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhheenvlwogcbdmhldrqiqknqjszpqrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397978.1663167-616-126454856989741/AnsiballZ_command.py'
Nov 29 06:32:58 compute-0 sudo[162510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:32:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:32:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:32:58.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:32:58 compute-0 python3.9[162512]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:32:58 compute-0 podman[162121]: 2025-11-29 06:32:58.73387009 +0000 UTC m=+2.159668989 container remove 26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 06:32:58 compute-0 systemd[1]: libpod-conmon-26acf1bfa2f77e03b70753ce29d97d89003b8f5d60b94cac0bda1291f4d13c19.scope: Deactivated successfully.
Nov 29 06:32:58 compute-0 sudo[162510]: pam_unix(sudo:session): session closed for user root
Nov 29 06:32:58 compute-0 podman[162546]: 2025-11-29 06:32:58.875473755 +0000 UTC m=+0.024900223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:32:59 compute-0 podman[162546]: 2025-11-29 06:32:59.284311555 +0000 UTC m=+0.433737973 container create 310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bose, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 06:32:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:32:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:32:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:32:59.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:32:59 compute-0 systemd[1]: Started libpod-conmon-310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1.scope.
Nov 29 06:32:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4d363fd95cb12eb16a14f4c2d5015258ac8aba7f902dcbb8c5e23a81d31790/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4d363fd95cb12eb16a14f4c2d5015258ac8aba7f902dcbb8c5e23a81d31790/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4d363fd95cb12eb16a14f4c2d5015258ac8aba7f902dcbb8c5e23a81d31790/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4d363fd95cb12eb16a14f4c2d5015258ac8aba7f902dcbb8c5e23a81d31790/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4d363fd95cb12eb16a14f4c2d5015258ac8aba7f902dcbb8c5e23a81d31790/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:33:00 compute-0 python3.9[162691]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 06:33:00 compute-0 podman[162546]: 2025-11-29 06:33:00.344304857 +0000 UTC m=+1.493731265 container init 310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bose, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 06:33:00 compute-0 podman[162546]: 2025-11-29 06:33:00.360567171 +0000 UTC m=+1.509993589 container start 310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bose, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:33:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:00.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:00 compute-0 podman[162546]: 2025-11-29 06:33:00.941022584 +0000 UTC m=+2.090448992 container attach 310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bose, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:33:01 compute-0 sudo[162848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnllkywwwmxwoiaheevbtipardqsnkjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397980.7391388-670-162443532576265/AnsiballZ_systemd_service.py'
Nov 29 06:33:01 compute-0 sudo[162848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:01 compute-0 xenodochial_bose[162662]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:33:01 compute-0 xenodochial_bose[162662]: --> relative data size: 1.0
Nov 29 06:33:01 compute-0 xenodochial_bose[162662]: --> All data devices are unavailable
Nov 29 06:33:01 compute-0 systemd[1]: libpod-310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1.scope: Deactivated successfully.
Nov 29 06:33:01 compute-0 podman[162546]: 2025-11-29 06:33:01.192409316 +0000 UTC m=+2.341835704 container died 310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bose, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 06:33:01 compute-0 python3.9[162852]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 06:33:01 compute-0 systemd[1]: Reloading.
Nov 29 06:33:01 compute-0 systemd-rc-local-generator[162892]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:33:01 compute-0 systemd-sysv-generator[162895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:33:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:33:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:01.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:33:01 compute-0 sudo[162848]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:01 compute-0 ceph-mon[74654]: pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a4d363fd95cb12eb16a14f4c2d5015258ac8aba7f902dcbb8c5e23a81d31790-merged.mount: Deactivated successfully.
Nov 29 06:33:01 compute-0 podman[162546]: 2025-11-29 06:33:01.978468232 +0000 UTC m=+3.127894650 container remove 310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 06:33:02 compute-0 sudo[161957]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:02 compute-0 sudo[162928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:33:02 compute-0 systemd[1]: libpod-conmon-310023237c9171541669dfd093d4e036bd3bba2126ffdb05bae28861a08693e1.scope: Deactivated successfully.
Nov 29 06:33:02 compute-0 sudo[162928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:02 compute-0 sudo[162928]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:02 compute-0 sudo[162953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:33:02 compute-0 sudo[162953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:02 compute-0 sudo[162953]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:02 compute-0 sudo[162978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:33:02 compute-0 sudo[162978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:02 compute-0 sudo[162978]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:02 compute-0 sudo[163003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:33:02 compute-0 sudo[163003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:33:02 compute-0 sudo[163206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuspqadpborsfppbnnzbkkuopdrcrgil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397982.3295028-694-189255467810611/AnsiballZ_command.py'
Nov 29 06:33:02 compute-0 sudo[163206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:02 compute-0 podman[163166]: 2025-11-29 06:33:02.578123732 +0000 UTC m=+0.020323922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:33:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:02.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:02 compute-0 python3.9[163208]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:33:02 compute-0 sudo[163206]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:03 compute-0 podman[163166]: 2025-11-29 06:33:03.243252724 +0000 UTC m=+0.685452894 container create 25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 06:33:03 compute-0 sudo[163360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqdisxoymogwjfwritehpnypiqafnxoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397983.0125499-694-2341624473477/AnsiballZ_command.py'
Nov 29 06:33:03 compute-0 sudo[163360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:03.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:03 compute-0 systemd[1]: Started libpod-conmon-25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa.scope.
Nov 29 06:33:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:33:03 compute-0 python3.9[163362]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:33:03 compute-0 podman[163166]: 2025-11-29 06:33:03.57530258 +0000 UTC m=+1.017502810 container init 25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:33:03 compute-0 podman[163166]: 2025-11-29 06:33:03.58126712 +0000 UTC m=+1.023467290 container start 25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:33:03 compute-0 awesome_curran[163365]: 167 167
Nov 29 06:33:03 compute-0 systemd[1]: libpod-25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa.scope: Deactivated successfully.
Nov 29 06:33:03 compute-0 podman[163166]: 2025-11-29 06:33:03.588331652 +0000 UTC m=+1.030531822 container attach 25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:33:03 compute-0 podman[163166]: 2025-11-29 06:33:03.588905578 +0000 UTC m=+1.031105768 container died 25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 06:33:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-06ed1fa480b9bc47727b6af792927a984659e8f346b2402a672dbc6b5b53e6d9-merged.mount: Deactivated successfully.
Nov 29 06:33:03 compute-0 podman[163166]: 2025-11-29 06:33:03.629476257 +0000 UTC m=+1.071676427 container remove 25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_curran, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 06:33:03 compute-0 systemd[1]: libpod-conmon-25384d1dc3bdb5fd583bfb1ff34a8ffa852385842a09a0271556f020349655aa.scope: Deactivated successfully.
Nov 29 06:33:03 compute-0 podman[163390]: 2025-11-29 06:33:03.79200089 +0000 UTC m=+0.038932293 container create 7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_driscoll, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:33:03 compute-0 systemd[1]: Started libpod-conmon-7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3.scope.
Nov 29 06:33:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6721d42e9225f604617b00f1b475769387969ec65ec577479e491bdf3d705b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6721d42e9225f604617b00f1b475769387969ec65ec577479e491bdf3d705b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6721d42e9225f604617b00f1b475769387969ec65ec577479e491bdf3d705b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6721d42e9225f604617b00f1b475769387969ec65ec577479e491bdf3d705b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:33:03 compute-0 podman[163390]: 2025-11-29 06:33:03.854245439 +0000 UTC m=+0.101176832 container init 7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_driscoll, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:33:03 compute-0 podman[163390]: 2025-11-29 06:33:03.86057718 +0000 UTC m=+0.107508573 container start 7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:33:03 compute-0 podman[163390]: 2025-11-29 06:33:03.863634017 +0000 UTC m=+0.110565410 container attach 7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_driscoll, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:33:03 compute-0 podman[163390]: 2025-11-29 06:33:03.775157409 +0000 UTC m=+0.022088822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:33:04 compute-0 ceph-mon[74654]: pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:04 compute-0 ceph-mon[74654]: pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:04 compute-0 sudo[163360]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]: {
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:     "1": [
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:         {
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:             "devices": [
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:                 "/dev/loop3"
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:             ],
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:             "lv_name": "ceph_lv0",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:             "lv_size": "7511998464",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:             "name": "ceph_lv0",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:             "tags": {
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:                 "ceph.cluster_name": "ceph",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:                 "ceph.crush_device_class": "",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:                 "ceph.encrypted": "0",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:                 "ceph.osd_id": "1",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:                 "ceph.type": "block",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:                 "ceph.vdo": "0"
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:             },
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:             "type": "block",
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:             "vg_name": "ceph_vg0"
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:         }
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]:     ]
Nov 29 06:33:04 compute-0 infallible_driscoll[163406]: }
Nov 29 06:33:04 compute-0 systemd[1]: libpod-7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3.scope: Deactivated successfully.
Nov 29 06:33:04 compute-0 podman[163390]: 2025-11-29 06:33:04.668616574 +0000 UTC m=+0.915547987 container died 7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 06:33:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:33:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:04.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:33:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6721d42e9225f604617b00f1b475769387969ec65ec577479e491bdf3d705b2-merged.mount: Deactivated successfully.
Nov 29 06:33:04 compute-0 podman[163390]: 2025-11-29 06:33:04.729869654 +0000 UTC m=+0.976801047 container remove 7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_driscoll, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 06:33:04 compute-0 systemd[1]: libpod-conmon-7634b6788e77965e38d94054b0df4815d91a6887c7abfa18dab20fe45fa238a3.scope: Deactivated successfully.
Nov 29 06:33:04 compute-0 sudo[163003]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:04 compute-0 sudo[163481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:33:04 compute-0 sudo[163481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:04 compute-0 sudo[163481]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:04 compute-0 sudo[163539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:33:04 compute-0 sudo[163539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:04 compute-0 sudo[163539]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:04 compute-0 sudo[163589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:33:04 compute-0 sudo[163589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:04 compute-0 sudo[163589]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:05 compute-0 sudo[163674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtiapekbnnsxxyoijvonolnqyegwdxtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397984.743698-694-153854321292734/AnsiballZ_command.py'
Nov 29 06:33:05 compute-0 sudo[163627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:33:05 compute-0 sudo[163674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:05 compute-0 sudo[163627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:05 compute-0 python3.9[163677]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:33:05 compute-0 sudo[163674]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:05 compute-0 podman[163744]: 2025-11-29 06:33:05.338616725 +0000 UTC m=+0.033754636 container create 601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:33:05 compute-0 systemd[1]: Started libpod-conmon-601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998.scope.
Nov 29 06:33:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:33:05 compute-0 podman[163744]: 2025-11-29 06:33:05.323948626 +0000 UTC m=+0.019086567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:33:05 compute-0 podman[163744]: 2025-11-29 06:33:05.424847848 +0000 UTC m=+0.119985799 container init 601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 06:33:05 compute-0 podman[163744]: 2025-11-29 06:33:05.433933368 +0000 UTC m=+0.129071289 container start 601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 06:33:05 compute-0 podman[163744]: 2025-11-29 06:33:05.437538271 +0000 UTC m=+0.132676192 container attach 601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 06:33:05 compute-0 hardcore_shaw[163782]: 167 167
Nov 29 06:33:05 compute-0 systemd[1]: libpod-601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998.scope: Deactivated successfully.
Nov 29 06:33:05 compute-0 podman[163744]: 2025-11-29 06:33:05.442732379 +0000 UTC m=+0.137870300 container died 601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 06:33:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-df6f04d5e51d675a7ead722a661490a642a8ddc7f6e8764a6d70b5867361c0d2-merged.mount: Deactivated successfully.
Nov 29 06:33:05 compute-0 podman[163744]: 2025-11-29 06:33:05.476689939 +0000 UTC m=+0.171827860 container remove 601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shaw, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 06:33:05 compute-0 systemd[1]: libpod-conmon-601cd3aa451a375677818abc071a5ac9c2c6930cd03f1bc8ee10cb9ab7d25998.scope: Deactivated successfully.
Nov 29 06:33:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:05.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:05 compute-0 sudo[163921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwoypionzgfypwywxnwfmlghcuyqgxxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397985.3653634-694-117464091005535/AnsiballZ_command.py'
Nov 29 06:33:05 compute-0 sudo[163921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:05 compute-0 podman[163881]: 2025-11-29 06:33:05.635765824 +0000 UTC m=+0.026521029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:33:05 compute-0 python3.9[163923]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:33:05 compute-0 sudo[163921]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:06 compute-0 podman[163881]: 2025-11-29 06:33:06.162731347 +0000 UTC m=+0.553486522 container create ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_albattani, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 06:33:06 compute-0 sudo[164042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:33:06 compute-0 sudo[164042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:06 compute-0 sudo[164042]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:06 compute-0 ceph-mon[74654]: pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:06 compute-0 sudo[164097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:33:06 compute-0 sudo[164122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bavhxfblxlhoskhakjkdcyqtlpwqxenk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397986.0930974-694-168448992070758/AnsiballZ_command.py'
Nov 29 06:33:06 compute-0 sudo[164097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:06 compute-0 sudo[164122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:06 compute-0 sudo[164097]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:06 compute-0 systemd[1]: Started libpod-conmon-ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f.scope.
Nov 29 06:33:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b45609cfea26294b4b655fbf5ccb70b9fb5f2aaade1a546216aea03f44ea54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b45609cfea26294b4b655fbf5ccb70b9fb5f2aaade1a546216aea03f44ea54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b45609cfea26294b4b655fbf5ccb70b9fb5f2aaade1a546216aea03f44ea54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b45609cfea26294b4b655fbf5ccb70b9fb5f2aaade1a546216aea03f44ea54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:33:06 compute-0 podman[163881]: 2025-11-29 06:33:06.600263057 +0000 UTC m=+0.991018252 container init ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_albattani, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:33:06 compute-0 podman[163881]: 2025-11-29 06:33:06.614287707 +0000 UTC m=+1.005042922 container start ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_albattani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 06:33:06 compute-0 podman[163881]: 2025-11-29 06:33:06.618363964 +0000 UTC m=+1.009119229 container attach ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_albattani, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 06:33:06 compute-0 python3.9[164126]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:33:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:33:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:06.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:33:06 compute-0 sudo[164122]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:07 compute-0 sudo[164285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evqhtcienpoftowjxhsjqrmxvxhlghve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397986.8424916-694-224222351300174/AnsiballZ_command.py'
Nov 29 06:33:07 compute-0 sudo[164285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:07 compute-0 python3.9[164287]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:33:07 compute-0 sudo[164285]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:07 compute-0 keen_albattani[164129]: {
Nov 29 06:33:07 compute-0 keen_albattani[164129]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:33:07 compute-0 keen_albattani[164129]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:33:07 compute-0 keen_albattani[164129]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:33:07 compute-0 keen_albattani[164129]:         "osd_id": 1,
Nov 29 06:33:07 compute-0 keen_albattani[164129]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:33:07 compute-0 keen_albattani[164129]:         "type": "bluestore"
Nov 29 06:33:07 compute-0 keen_albattani[164129]:     }
Nov 29 06:33:07 compute-0 keen_albattani[164129]: }
Nov 29 06:33:07 compute-0 systemd[1]: libpod-ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f.scope: Deactivated successfully.
Nov 29 06:33:07 compute-0 podman[163881]: 2025-11-29 06:33:07.458701701 +0000 UTC m=+1.849456876 container died ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:33:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:07.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:33:07 compute-0 sudo[164464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahajitotggesbpaljnkdhfiodaztmtlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397987.554724-694-166986358524423/AnsiballZ_command.py'
Nov 29 06:33:07 compute-0 sudo[164464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:08 compute-0 python3.9[164466]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:33:08 compute-0 sudo[164464]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:08 compute-0 ceph-mon[74654]: pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1b45609cfea26294b4b655fbf5ccb70b9fb5f2aaade1a546216aea03f44ea54-merged.mount: Deactivated successfully.
Nov 29 06:33:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:08.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:09 compute-0 sudo[164620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oldobxumbquwzpsivppbqnohbthswgmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397988.8130345-856-239607885083418/AnsiballZ_getent.py'
Nov 29 06:33:09 compute-0 sudo[164620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:09 compute-0 python3.9[164622]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 29 06:33:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:09.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:09 compute-0 sudo[164620]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:09 compute-0 podman[163881]: 2025-11-29 06:33:09.54284411 +0000 UTC m=+3.933599285 container remove ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_albattani, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 29 06:33:09 compute-0 systemd[1]: libpod-conmon-ec5ef4187d31c60a36ab011e815bb8fe093027af562543e90170a7801fd8250f.scope: Deactivated successfully.
Nov 29 06:33:09 compute-0 sudo[163627]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:33:10 compute-0 ceph-mon[74654]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:33:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:33:10 compute-0 sudo[164773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcbutuzmoftlzieohtxyzzlghreyjrlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397989.7267504-880-17716672585085/AnsiballZ_group.py'
Nov 29 06:33:10 compute-0 sudo[164773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:10 compute-0 python3.9[164775]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 06:33:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:10.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:33:10 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev cebc02be-33a7-4b95-977c-44d85ce63f94 does not exist
Nov 29 06:33:10 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 1602ac6f-12fb-4e13-a643-23086e3e6f46 does not exist
Nov 29 06:33:10 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 0482c5ba-2734-477d-bf4c-b02a02194aad does not exist
Nov 29 06:33:10 compute-0 groupadd[164776]: group added to /etc/group: name=libvirt, GID=42473
Nov 29 06:33:10 compute-0 groupadd[164776]: group added to /etc/gshadow: name=libvirt
Nov 29 06:33:10 compute-0 groupadd[164776]: new group: name=libvirt, GID=42473
Nov 29 06:33:10 compute-0 sudo[164777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:33:10 compute-0 sudo[164777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:10 compute-0 sudo[164777]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:10 compute-0 sudo[164773]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:10 compute-0 sudo[164807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:33:10 compute-0 sudo[164807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:10 compute-0 sudo[164807]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:11 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:33:11 compute-0 ceph-mon[74654]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:11 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:33:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:11.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:11 compute-0 sudo[164982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbavyzncjjycvgnilbyiroqrexydlhwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397991.051812-904-223207666342927/AnsiballZ_user.py'
Nov 29 06:33:11 compute-0 sudo[164982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:11 compute-0 python3.9[164984]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:12 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 06:33:12 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 06:33:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:33:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:12.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:33:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:33:13 compute-0 useradd[164986]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 06:33:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:13.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:13 compute-0 ceph-mon[74654]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:14 compute-0 sshd-session[164989]: Received disconnect from 31.6.212.12 port 43610:11: Bye Bye [preauth]
Nov 29 06:33:14 compute-0 sshd-session[164989]: Disconnected from authenticating user root 31.6.212.12 port 43610 [preauth]
Nov 29 06:33:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:14.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:15 compute-0 podman[164992]: 2025-11-29 06:33:15.188243118 +0000 UTC m=+0.148181204 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Nov 29 06:33:15 compute-0 sudo[164982]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:15.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:15 compute-0 ceph-mon[74654]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:16 compute-0 sudo[165188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bivqsqqyyxnrgxnrctrzxjnhlazbnzff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397995.7511532-937-159329309628970/AnsiballZ_setup.py'
Nov 29 06:33:16 compute-0 sudo[165188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:16 compute-0 podman[165150]: 2025-11-29 06:33:16.200381913 +0000 UTC m=+0.114629276 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:33:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:16 compute-0 sshd-session[165025]: Invalid user train1 from 118.193.39.127 port 46430
Nov 29 06:33:16 compute-0 python3.9[165196]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:33:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:16.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:16 compute-0 sshd-session[165025]: Received disconnect from 118.193.39.127 port 46430:11: Bye Bye [preauth]
Nov 29 06:33:16 compute-0 sshd-session[165025]: Disconnected from invalid user train1 118.193.39.127 port 46430 [preauth]
Nov 29 06:33:16 compute-0 sudo[165188]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:33:17.216 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:33:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:33:17.217 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:33:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:33:17.218 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:33:17 compute-0 sudo[165279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gulpfemalidstkdgrstqftbsduouxlro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764397995.7511532-937-159329309628970/AnsiballZ_dnf.py'
Nov 29 06:33:17 compute-0 sudo[165279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:33:17 compute-0 python3.9[165281]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:33:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:17.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:33:18 compute-0 ceph-mon[74654]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:18.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:19 compute-0 ceph-mon[74654]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:33:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:19.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:33:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:20.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:21 compute-0 ceph-mon[74654]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:21.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:33:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:22.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:33:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:23.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:33:24 compute-0 ceph-mon[74654]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:33:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:33:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:33:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:33:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:33:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:33:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:33:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:24.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:33:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:33:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:25.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:33:25 compute-0 ceph-mon[74654]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:26 compute-0 sudo[165297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:33:26 compute-0 sudo[165297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:26 compute-0 sudo[165297]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:26 compute-0 sudo[165322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:33:26 compute-0 sudo[165322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:26 compute-0 sudo[165322]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:26.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:27 compute-0 ceph-mon[74654]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:27.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:33:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:28.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:33:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:29.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:33:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:33:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:33:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:33:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:33:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:33:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:33:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:33:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:33:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:33:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:33:29 compute-0 ceph-mon[74654]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:30.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:31 compute-0 ceph-mon[74654]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:31.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:32 compute-0 sshd-session[165349]: Invalid user in from 115.190.37.201 port 40220
Nov 29 06:33:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:32 compute-0 sshd-session[165349]: Received disconnect from 115.190.37.201 port 40220:11: Bye Bye [preauth]
Nov 29 06:33:32 compute-0 sshd-session[165349]: Disconnected from invalid user in 115.190.37.201 port 40220 [preauth]
Nov 29 06:33:32 compute-0 sshd-session[165352]: Invalid user janice from 49.247.35.31 port 11644
Nov 29 06:33:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:33:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:33:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:32.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:33:32 compute-0 sshd-session[165352]: Received disconnect from 49.247.35.31 port 11644:11: Bye Bye [preauth]
Nov 29 06:33:32 compute-0 sshd-session[165352]: Disconnected from invalid user janice 49.247.35.31 port 11644 [preauth]
Nov 29 06:33:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:33.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:34 compute-0 ceph-mon[74654]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:34.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:35 compute-0 ceph-mon[74654]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:33:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:35.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:33:36 compute-0 sshd-session[165404]: Invalid user guest123 from 79.116.35.29 port 42096
Nov 29 06:33:36 compute-0 sshd-session[165404]: Received disconnect from 79.116.35.29 port 42096:11: Bye Bye [preauth]
Nov 29 06:33:36 compute-0 sshd-session[165404]: Disconnected from invalid user guest123 79.116.35.29 port 42096 [preauth]
Nov 29 06:33:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:33:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:36.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:33:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:37.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:33:37 compute-0 ceph-mon[74654]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:38.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:38 compute-0 ceph-mon[74654]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:39.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:40.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:41.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:42 compute-0 ceph-mon[74654]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:33:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:33:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:42.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:33:43 compute-0 ceph-mon[74654]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:43.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:44.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:33:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:45.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:33:46 compute-0 ceph-mon[74654]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:46 compute-0 podman[165540]: 2025-11-29 06:33:46.182388215 +0000 UTC m=+0.134955456 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 06:33:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:46 compute-0 sudo[165570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:33:46 compute-0 sudo[165570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:46 compute-0 sudo[165570]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:46 compute-0 sudo[165601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:33:46 compute-0 sudo[165601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:33:46 compute-0 sudo[165601]: pam_unix(sudo:session): session closed for user root
Nov 29 06:33:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:33:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:46.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:33:46 compute-0 podman[165594]: 2025-11-29 06:33:46.768047267 +0000 UTC m=+0.066020107 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:33:47 compute-0 sshd-session[165537]: Received disconnect from 103.147.159.91 port 53694:11: Bye Bye [preauth]
Nov 29 06:33:47 compute-0 sshd-session[165537]: Disconnected from authenticating user root 103.147.159.91 port 53694 [preauth]
Nov 29 06:33:47 compute-0 ceph-mon[74654]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:47.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:33:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:48.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:33:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:49.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:33:49 compute-0 ceph-mon[74654]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:50.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:51 compute-0 ceph-mon[74654]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:51.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:52.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:33:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:53.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:33:53 compute-0 ceph-mon[74654]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:33:53 compute-0 sshd-session[165645]: Invalid user nginx from 104.208.108.166 port 38748
Nov 29 06:33:54 compute-0 sshd-session[165643]: Invalid user hadoop from 103.63.25.115 port 41922
Nov 29 06:33:54 compute-0 sshd-session[165645]: Received disconnect from 104.208.108.166 port 38748:11: Bye Bye [preauth]
Nov 29 06:33:54 compute-0 sshd-session[165645]: Disconnected from invalid user nginx 104.208.108.166 port 38748 [preauth]
Nov 29 06:33:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:33:54
Nov 29 06:33:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:33:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:33:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'images', 'vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Nov 29 06:33:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:33:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:33:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:33:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:33:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:33:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:33:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:33:54 compute-0 sshd-session[165643]: Received disconnect from 103.63.25.115 port 41922:11: Bye Bye [preauth]
Nov 29 06:33:54 compute-0 sshd-session[165643]: Disconnected from invalid user hadoop 103.63.25.115 port 41922 [preauth]
Nov 29 06:33:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:54.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:55.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:55 compute-0 ceph-mon[74654]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:56 compute-0 kernel: SELinux:  Converting 2771 SID table entries...
Nov 29 06:33:56 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:33:56 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 06:33:56 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:33:56 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:33:56 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:33:56 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:33:56 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:33:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:56.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:33:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:57.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:33:57 compute-0 ceph-mon[74654]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:33:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:33:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:33:58.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:33:59 compute-0 ceph-mon[74654]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:33:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:33:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:33:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:33:59.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:34:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:34:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:00.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:34:01 compute-0 ceph-mon[74654]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:01.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:02.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:03 compute-0 ceph-mon[74654]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:34:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:03.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:34:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:34:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:34:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:04.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:34:04 compute-0 sshd-session[165661]: Invalid user userb from 162.214.92.14 port 58996
Nov 29 06:34:04 compute-0 sshd-session[165661]: Received disconnect from 162.214.92.14 port 58996:11: Bye Bye [preauth]
Nov 29 06:34:04 compute-0 sshd-session[165661]: Disconnected from invalid user userb 162.214.92.14 port 58996 [preauth]
Nov 29 06:34:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:05.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:05 compute-0 ceph-mon[74654]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:34:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:06.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:34:06 compute-0 sudo[165670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:06 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 29 06:34:06 compute-0 sudo[165670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:06 compute-0 sudo[165670]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:06 compute-0 sudo[165695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:06 compute-0 sudo[165695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:06 compute-0 sudo[165695]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:07 compute-0 sshd-session[165668]: Received disconnect from 193.46.255.7 port 16208:11:  [preauth]
Nov 29 06:34:07 compute-0 sshd-session[165668]: Disconnected from authenticating user root 193.46.255.7 port 16208 [preauth]
Nov 29 06:34:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:34:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:07.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:34:07 compute-0 sshd-session[165707]: Invalid user mcserver from 176.109.67.96 port 45134
Nov 29 06:34:07 compute-0 sshd-session[165707]: Received disconnect from 176.109.67.96 port 45134:11: Bye Bye [preauth]
Nov 29 06:34:07 compute-0 sshd-session[165707]: Disconnected from invalid user mcserver 176.109.67.96 port 45134 [preauth]
Nov 29 06:34:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:34:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:08.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:09 compute-0 ceph-mon[74654]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:09.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:10 compute-0 kernel: SELinux:  Converting 2771 SID table entries...
Nov 29 06:34:10 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:34:10 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 06:34:10 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:34:10 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:34:10 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:34:10 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:34:10 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:34:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:10.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:10 compute-0 ceph-mon[74654]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:11 compute-0 sudo[165728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:11 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 29 06:34:11 compute-0 sudo[165728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:11 compute-0 sudo[165728]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:11 compute-0 sudo[165753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:34:11 compute-0 sudo[165753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:11 compute-0 sudo[165753]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:11 compute-0 sudo[165778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:11 compute-0 sudo[165778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:11 compute-0 sudo[165778]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:11 compute-0 sudo[165803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:34:11 compute-0 sudo[165803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:11.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:11 compute-0 sudo[165803]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:34:12 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:34:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:34:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:34:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:34:12 compute-0 ceph-mon[74654]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 5e0da141-7b32-4f52-b80f-47e56d5b6028 does not exist
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 482a086a-84b7-4e02-8696-9bd5a9742c88 does not exist
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 23119597-ad30-49e6-b54f-bca1dd7836fa does not exist
Nov 29 06:34:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:34:12 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:34:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:34:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:34:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:34:12 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:34:12 compute-0 sudo[165861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:12 compute-0 sudo[165861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:12 compute-0 sudo[165861]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:12.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:12 compute-0 sudo[165886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:34:12 compute-0 sudo[165886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:12 compute-0 sudo[165886]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:12 compute-0 sudo[165911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:12 compute-0 sudo[165911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:12 compute-0 sudo[165911]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:34:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:34:12 compute-0 sudo[165936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:34:12 compute-0 sudo[165936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:13 compute-0 podman[166003]: 2025-11-29 06:34:13.295556664 +0000 UTC m=+0.118088145 container create b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 06:34:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:34:13 compute-0 podman[166003]: 2025-11-29 06:34:13.204234915 +0000 UTC m=+0.026766366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:34:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:34:13 compute-0 ceph-mon[74654]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:34:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:34:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:34:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:34:13 compute-0 systemd[1]: Started libpod-conmon-b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb.scope.
Nov 29 06:34:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:34:13 compute-0 podman[166003]: 2025-11-29 06:34:13.467564808 +0000 UTC m=+0.290096289 container init b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:34:13 compute-0 podman[166003]: 2025-11-29 06:34:13.480859617 +0000 UTC m=+0.303391068 container start b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:34:13 compute-0 determined_kirch[166018]: 167 167
Nov 29 06:34:13 compute-0 systemd[1]: libpod-b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb.scope: Deactivated successfully.
Nov 29 06:34:13 compute-0 podman[166003]: 2025-11-29 06:34:13.562269643 +0000 UTC m=+0.384801124 container attach b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 06:34:13 compute-0 podman[166003]: 2025-11-29 06:34:13.562942842 +0000 UTC m=+0.385474293 container died b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:34:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:13.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:34:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1e3d042271d9109a20c1af675297af8916f0aa265268002958bfdc18c5d2a88-merged.mount: Deactivated successfully.
Nov 29 06:34:14 compute-0 podman[166003]: 2025-11-29 06:34:14.300033599 +0000 UTC m=+1.122565090 container remove b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kirch, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 06:34:14 compute-0 systemd[1]: libpod-conmon-b547932a68b45cb9e59c7a5dad5bcce13b6d6df15c592457341cc969c0984beb.scope: Deactivated successfully.
Nov 29 06:34:14 compute-0 sshd-session[165859]: Invalid user admin123 from 27.112.78.245 port 43778
Nov 29 06:34:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:14 compute-0 podman[166045]: 2025-11-29 06:34:14.558571175 +0000 UTC m=+0.080996755 container create aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:34:14 compute-0 podman[166045]: 2025-11-29 06:34:14.521512066 +0000 UTC m=+0.043937656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:34:14 compute-0 sshd-session[165859]: Received disconnect from 27.112.78.245 port 43778:11: Bye Bye [preauth]
Nov 29 06:34:14 compute-0 sshd-session[165859]: Disconnected from invalid user admin123 27.112.78.245 port 43778 [preauth]
Nov 29 06:34:14 compute-0 systemd[1]: Started libpod-conmon-aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735.scope.
Nov 29 06:34:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32331226be33a27048904908fff2178554db3b45e9850d354179b322efa7fbb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32331226be33a27048904908fff2178554db3b45e9850d354179b322efa7fbb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32331226be33a27048904908fff2178554db3b45e9850d354179b322efa7fbb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32331226be33a27048904908fff2178554db3b45e9850d354179b322efa7fbb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32331226be33a27048904908fff2178554db3b45e9850d354179b322efa7fbb8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:34:14 compute-0 podman[166045]: 2025-11-29 06:34:14.805511439 +0000 UTC m=+0.327936989 container init aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:34:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:34:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:14.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:34:14 compute-0 podman[166045]: 2025-11-29 06:34:14.814412884 +0000 UTC m=+0.336838444 container start aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 29 06:34:14 compute-0 podman[166045]: 2025-11-29 06:34:14.819992703 +0000 UTC m=+0.342418263 container attach aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 06:34:15 compute-0 eloquent_curie[166061]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:34:15 compute-0 eloquent_curie[166061]: --> relative data size: 1.0
Nov 29 06:34:15 compute-0 eloquent_curie[166061]: --> All data devices are unavailable
Nov 29 06:34:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:15.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:15 compute-0 systemd[1]: libpod-aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735.scope: Deactivated successfully.
Nov 29 06:34:15 compute-0 podman[166045]: 2025-11-29 06:34:15.605146674 +0000 UTC m=+1.127572234 container died aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:34:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-32331226be33a27048904908fff2178554db3b45e9850d354179b322efa7fbb8-merged.mount: Deactivated successfully.
Nov 29 06:34:16 compute-0 ceph-mon[74654]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:16 compute-0 podman[166045]: 2025-11-29 06:34:16.598686467 +0000 UTC m=+2.121112027 container remove aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_curie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 06:34:16 compute-0 systemd[1]: libpod-conmon-aa5d054c1f51d8269a1bdb350c5181991c4c35d6c94e943436e58f1af266d735.scope: Deactivated successfully.
Nov 29 06:34:16 compute-0 sudo[165936]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:16 compute-0 sudo[166097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:16 compute-0 sudo[166097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:16 compute-0 sudo[166097]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:16 compute-0 sudo[166135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:34:16 compute-0 sudo[166135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:16 compute-0 sudo[166135]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:16 compute-0 podman[166091]: 2025-11-29 06:34:16.786424441 +0000 UTC m=+0.134248687 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 06:34:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:16.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:16 compute-0 sudo[166167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:16 compute-0 sudo[166167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:16 compute-0 sudo[166167]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:16 compute-0 podman[166168]: 2025-11-29 06:34:16.878674116 +0000 UTC m=+0.056565737 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:34:16 compute-0 sudo[166208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:34:16 compute-0 sudo[166208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:34:17.217 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:34:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:34:17.218 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:34:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:34:17.218 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:34:17 compute-0 podman[166278]: 2025-11-29 06:34:17.242249113 +0000 UTC m=+0.021415483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:34:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:17.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:17 compute-0 podman[166278]: 2025-11-29 06:34:17.696309114 +0000 UTC m=+0.475475434 container create 464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 29 06:34:17 compute-0 systemd[1]: Started libpod-conmon-464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206.scope.
Nov 29 06:34:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:34:18 compute-0 ceph-mon[74654]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:18 compute-0 podman[166278]: 2025-11-29 06:34:18.199167539 +0000 UTC m=+0.978333869 container init 464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 06:34:18 compute-0 podman[166278]: 2025-11-29 06:34:18.207737214 +0000 UTC m=+0.986903534 container start 464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 06:34:18 compute-0 sad_cori[166294]: 167 167
Nov 29 06:34:18 compute-0 systemd[1]: libpod-464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206.scope: Deactivated successfully.
Nov 29 06:34:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:18 compute-0 podman[166278]: 2025-11-29 06:34:18.43966291 +0000 UTC m=+1.218829230 container attach 464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 06:34:18 compute-0 podman[166278]: 2025-11-29 06:34:18.440647268 +0000 UTC m=+1.219813608 container died 464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 06:34:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:34:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:18.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7086dc547eb7aca5a2af3783860155700c4387b20c7d8f35006f20b5c5e5db7a-merged.mount: Deactivated successfully.
Nov 29 06:34:19 compute-0 podman[166278]: 2025-11-29 06:34:19.029620224 +0000 UTC m=+1.808786584 container remove 464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 29 06:34:19 compute-0 systemd[1]: libpod-conmon-464853d14779e7736e387e51b18e309994494ebeec9acf2996d942924102b206.scope: Deactivated successfully.
Nov 29 06:34:19 compute-0 podman[166321]: 2025-11-29 06:34:19.191983462 +0000 UTC m=+0.028700761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:34:19 compute-0 podman[166321]: 2025-11-29 06:34:19.443558189 +0000 UTC m=+0.280275488 container create bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_montalcini, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 06:34:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:19.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:20 compute-0 systemd[1]: Started libpod-conmon-bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535.scope.
Nov 29 06:34:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:34:20 compute-0 ceph-mon[74654]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc9294f4ab3b37bddcfdd7ea8eaa7f85e7b796d1bc1a710b56a57970c18c35b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc9294f4ab3b37bddcfdd7ea8eaa7f85e7b796d1bc1a710b56a57970c18c35b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc9294f4ab3b37bddcfdd7ea8eaa7f85e7b796d1bc1a710b56a57970c18c35b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bc9294f4ab3b37bddcfdd7ea8eaa7f85e7b796d1bc1a710b56a57970c18c35b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:34:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:20.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:20 compute-0 podman[166321]: 2025-11-29 06:34:20.82763202 +0000 UTC m=+1.664349369 container init bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_montalcini, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:34:20 compute-0 podman[166321]: 2025-11-29 06:34:20.839040796 +0000 UTC m=+1.675758125 container start bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_montalcini, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:34:21 compute-0 podman[166321]: 2025-11-29 06:34:21.363796786 +0000 UTC m=+2.200514125 container attach bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 29 06:34:21 compute-0 ceph-mon[74654]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:21 compute-0 festive_montalcini[166338]: {
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:     "1": [
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:         {
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:             "devices": [
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:                 "/dev/loop3"
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:             ],
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:             "lv_name": "ceph_lv0",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:             "lv_size": "7511998464",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:             "name": "ceph_lv0",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:             "tags": {
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:                 "ceph.cluster_name": "ceph",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:                 "ceph.crush_device_class": "",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:                 "ceph.encrypted": "0",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:                 "ceph.osd_id": "1",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:                 "ceph.type": "block",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:                 "ceph.vdo": "0"
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:             },
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:             "type": "block",
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:             "vg_name": "ceph_vg0"
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:         }
Nov 29 06:34:21 compute-0 festive_montalcini[166338]:     ]
Nov 29 06:34:21 compute-0 festive_montalcini[166338]: }
Nov 29 06:34:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:21.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:21 compute-0 systemd[1]: libpod-bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535.scope: Deactivated successfully.
Nov 29 06:34:21 compute-0 podman[166321]: 2025-11-29 06:34:21.617838654 +0000 UTC m=+2.454555993 container died bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 06:34:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:22.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:34:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:23.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:34:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:34:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:34:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:34:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:34:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:34:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:34:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:34:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:24.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:25.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bc9294f4ab3b37bddcfdd7ea8eaa7f85e7b796d1bc1a710b56a57970c18c35b-merged.mount: Deactivated successfully.
Nov 29 06:34:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:26.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:27 compute-0 sudo[167428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:27 compute-0 sudo[167428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:27 compute-0 sudo[167428]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:27 compute-0 sudo[167503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:27 compute-0 sudo[167503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:27 compute-0 sudo[167503]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:27.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:28 compute-0 podman[166321]: 2025-11-29 06:34:28.087094096 +0000 UTC m=+8.923811395 container remove bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_montalcini, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:34:28 compute-0 ceph-mon[74654]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:28 compute-0 sudo[166208]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:28 compute-0 systemd[1]: libpod-conmon-bf2b276c11e7086a75d86e466c5344ad8cf33a11a8d3222c452e57784e226535.scope: Deactivated successfully.
Nov 29 06:34:28 compute-0 sudo[168118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:28 compute-0 sudo[168118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:28 compute-0 sudo[168118]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:28 compute-0 sshd-session[167358]: Received disconnect from 118.193.39.127 port 53422:11: Bye Bye [preauth]
Nov 29 06:34:28 compute-0 sshd-session[167358]: Disconnected from authenticating user root 118.193.39.127 port 53422 [preauth]
Nov 29 06:34:28 compute-0 sudo[168183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:34:28 compute-0 sudo[168183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:28 compute-0 sudo[168183]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:28 compute-0 sudo[168250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:28 compute-0 sudo[168250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:28 compute-0 sudo[168250]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:28 compute-0 sudo[168308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:34:28 compute-0 sudo[168308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:28 compute-0 podman[168564]: 2025-11-29 06:34:28.704037978 +0000 UTC m=+0.050507223 container create a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:34:28 compute-0 podman[168564]: 2025-11-29 06:34:28.680597479 +0000 UTC m=+0.027066744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:34:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:34:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:34:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:28.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:34:28 compute-0 systemd[1]: Started libpod-conmon-a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a.scope.
Nov 29 06:34:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:34:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:34:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:34:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:34:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:34:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:34:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:29.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:34:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:34:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:34:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:34:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:34:29 compute-0 podman[168564]: 2025-11-29 06:34:29.619588904 +0000 UTC m=+0.966058219 container init a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:34:29 compute-0 podman[168564]: 2025-11-29 06:34:29.632381819 +0000 UTC m=+0.978851104 container start a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 06:34:29 compute-0 tender_ardinghelli[168773]: 167 167
Nov 29 06:34:29 compute-0 systemd[1]: libpod-a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a.scope: Deactivated successfully.
Nov 29 06:34:30 compute-0 podman[168564]: 2025-11-29 06:34:30.024030947 +0000 UTC m=+1.370500232 container attach a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:34:30 compute-0 podman[168564]: 2025-11-29 06:34:30.024736818 +0000 UTC m=+1.371206103 container died a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:34:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:30 compute-0 ceph-mon[74654]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:30 compute-0 ceph-mon[74654]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:30 compute-0 ceph-mon[74654]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:30.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:31 compute-0 sshd-session[169527]: Received disconnect from 103.143.238.173 port 51272:11: Bye Bye [preauth]
Nov 29 06:34:31 compute-0 sshd-session[169527]: Disconnected from authenticating user root 103.143.238.173 port 51272 [preauth]
Nov 29 06:34:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7a7583cb7c93af5d273a861c6b8db11944d27243347aa74d5f946b40e5288d4-merged.mount: Deactivated successfully.
Nov 29 06:34:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:31.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:32.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:33.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:34:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:34 compute-0 ceph-mon[74654]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:34 compute-0 podman[168564]: 2025-11-29 06:34:34.691139196 +0000 UTC m=+6.037608441 container remove a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:34:34 compute-0 systemd[1]: libpod-conmon-a4fbafa9802ade5d936a227a0089273b259d40472eb532d12151d10567302f4a.scope: Deactivated successfully.
Nov 29 06:34:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:34.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:35 compute-0 podman[171900]: 2025-11-29 06:34:34.92497222 +0000 UTC m=+0.029519501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:34:35 compute-0 podman[171900]: 2025-11-29 06:34:35.559696999 +0000 UTC m=+0.664244290 container create a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dirac, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 06:34:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:35.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:36 compute-0 systemd[1]: Started libpod-conmon-a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128.scope.
Nov 29 06:34:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:36.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1f077d790b198b86c504577439c508e53074d22b665a6519b4ce036f9a0205/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1f077d790b198b86c504577439c508e53074d22b665a6519b4ce036f9a0205/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1f077d790b198b86c504577439c508e53074d22b665a6519b4ce036f9a0205/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:34:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1f077d790b198b86c504577439c508e53074d22b665a6519b4ce036f9a0205/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:34:37 compute-0 ceph-mon[74654]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:37 compute-0 ceph-mon[74654]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:37.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:37 compute-0 podman[171900]: 2025-11-29 06:34:37.723206548 +0000 UTC m=+2.827753859 container init a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dirac, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:34:37 compute-0 podman[171900]: 2025-11-29 06:34:37.735164793 +0000 UTC m=+2.839712074 container start a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 06:34:38 compute-0 podman[171900]: 2025-11-29 06:34:38.410617106 +0000 UTC m=+3.515164447 container attach a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 06:34:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:38 compute-0 jolly_dirac[172871]: {
Nov 29 06:34:38 compute-0 jolly_dirac[172871]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:34:38 compute-0 jolly_dirac[172871]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:34:38 compute-0 jolly_dirac[172871]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:34:38 compute-0 jolly_dirac[172871]:         "osd_id": 1,
Nov 29 06:34:38 compute-0 jolly_dirac[172871]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:34:38 compute-0 jolly_dirac[172871]:         "type": "bluestore"
Nov 29 06:34:38 compute-0 jolly_dirac[172871]:     }
Nov 29 06:34:38 compute-0 jolly_dirac[172871]: }
Nov 29 06:34:38 compute-0 systemd[1]: libpod-a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128.scope: Deactivated successfully.
Nov 29 06:34:38 compute-0 podman[171900]: 2025-11-29 06:34:38.634864384 +0000 UTC m=+3.739411645 container died a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dirac, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:34:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:34:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:38.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:34:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:34:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:34:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:39.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:34:39 compute-0 sshd-session[173940]: Invalid user localhost from 31.6.212.12 port 38480
Nov 29 06:34:40 compute-0 sshd-session[173940]: Received disconnect from 31.6.212.12 port 38480:11: Bye Bye [preauth]
Nov 29 06:34:40 compute-0 sshd-session[173940]: Disconnected from invalid user localhost 31.6.212.12 port 38480 [preauth]
Nov 29 06:34:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-da1f077d790b198b86c504577439c508e53074d22b665a6519b4ce036f9a0205-merged.mount: Deactivated successfully.
Nov 29 06:34:40 compute-0 ceph-mon[74654]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:40.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:41.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:42 compute-0 podman[171900]: 2025-11-29 06:34:42.159025979 +0000 UTC m=+7.263573230 container remove a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 06:34:42 compute-0 sudo[168308]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:34:42 compute-0 ceph-mon[74654]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:42 compute-0 systemd[1]: libpod-conmon-a3650dbf9e5bca35aa9e4ff3f9719986ed11e66613976879baa33f76a7fed128.scope: Deactivated successfully.
Nov 29 06:34:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:34:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:34:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:34:42 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev df6a06d4-81be-400b-abd6-eeb9f7eb311e does not exist
Nov 29 06:34:42 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 68626a8a-066f-4c4a-98df-956a9b37e242 does not exist
Nov 29 06:34:42 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 0c4c8572-f509-4d6f-9780-055b47a88c47 does not exist
Nov 29 06:34:42 compute-0 sudo[175633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:42 compute-0 sudo[175633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:42 compute-0 sudo[175633]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:42 compute-0 sudo[175701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:34:42 compute-0 sudo[175701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:42 compute-0 sudo[175701]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:42.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:43.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:44 compute-0 ceph-mon[74654]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:44 compute-0 ceph-mon[74654]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:44 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:34:44 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:34:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:34:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:44.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:45.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:34:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:46.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:34:47 compute-0 podman[178188]: 2025-11-29 06:34:47.123063311 +0000 UTC m=+0.070692957 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 06:34:47 compute-0 podman[178196]: 2025-11-29 06:34:47.174352918 +0000 UTC m=+0.121724387 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 06:34:47 compute-0 sudo[178294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:47 compute-0 sudo[178294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:47 compute-0 sudo[178294]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:47 compute-0 sudo[178369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:34:47 compute-0 sudo[178369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:34:47 compute-0 sudo[178369]: pam_unix(sudo:session): session closed for user root
Nov 29 06:34:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:47.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:48 compute-0 ceph-mon[74654]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:48 compute-0 sshd-session[173400]: error: kex_exchange_identification: read: Connection timed out
Nov 29 06:34:48 compute-0 sshd-session[173400]: banner exchange: Connection from 58.210.98.130 port 43634: Connection timed out
Nov 29 06:34:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:48.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:34:49 compute-0 ceph-mon[74654]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:49 compute-0 ceph-mon[74654]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:49.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:34:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:50.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:34:51 compute-0 ceph-mon[74654]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:51.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:52.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:53.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:53 compute-0 ceph-mon[74654]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:34:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:34:54
Nov 29 06:34:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:34:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:34:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', '.rgw.root', 'vms', 'volumes', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 29 06:34:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:34:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:34:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:34:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:34:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:34:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:34:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:34:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:54.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:55 compute-0 ceph-mon[74654]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:55.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:56.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:57.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:58 compute-0 ceph-mon[74654]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:34:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:34:58.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:34:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:34:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:34:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:34:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:34:59.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:35:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:00.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:35:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:01.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:02 compute-0 ceph-mon[74654]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:02.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:03 compute-0 ceph-mon[74654]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:03 compute-0 ceph-mon[74654]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:03.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:35:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:35:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:04.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:35:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:05.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:06 compute-0 ceph-mon[74654]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:06.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:07 compute-0 sudo[183606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:35:07 compute-0 sudo[183606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:35:07 compute-0 sudo[183606]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:07 compute-0 sudo[183631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:35:07 compute-0 sudo[183631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:35:07 compute-0 sudo[183631]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:07.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:07 compute-0 sshd-session[183601]: Invalid user bitwarden from 49.247.35.31 port 13780
Nov 29 06:35:07 compute-0 sshd-session[183601]: Received disconnect from 49.247.35.31 port 13780:11: Bye Bye [preauth]
Nov 29 06:35:07 compute-0 sshd-session[183601]: Disconnected from invalid user bitwarden 49.247.35.31 port 13780 [preauth]
Nov 29 06:35:07 compute-0 sshd-session[183603]: Invalid user laravel from 104.208.108.166 port 7764
Nov 29 06:35:08 compute-0 sshd-session[183603]: Received disconnect from 104.208.108.166 port 7764:11: Bye Bye [preauth]
Nov 29 06:35:08 compute-0 sshd-session[183603]: Disconnected from invalid user laravel 104.208.108.166 port 7764 [preauth]
Nov 29 06:35:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:08 compute-0 sshd-session[183656]: Invalid user csgoserver from 197.13.24.157 port 53490
Nov 29 06:35:08 compute-0 sshd-session[183656]: Received disconnect from 197.13.24.157 port 53490:11: Bye Bye [preauth]
Nov 29 06:35:08 compute-0 sshd-session[183656]: Disconnected from invalid user csgoserver 197.13.24.157 port 53490 [preauth]
Nov 29 06:35:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:08.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:09 compute-0 ceph-mon[74654]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:35:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:09.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:35:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:35:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:10.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:35:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:11.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:35:11 compute-0 ceph-mon[74654]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:11 compute-0 sshd-session[183667]: Invalid user root1 from 103.147.159.91 port 53816
Nov 29 06:35:12 compute-0 sshd-session[183667]: Received disconnect from 103.147.159.91 port 53816:11: Bye Bye [preauth]
Nov 29 06:35:12 compute-0 sshd-session[183667]: Disconnected from invalid user root1 103.147.159.91 port 53816 [preauth]
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:35:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:12.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:35:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:35:13 compute-0 ceph-mon[74654]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:13.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:14 compute-0 sshd-session[183671]: Received disconnect from 162.214.92.14 port 58148:11: Bye Bye [preauth]
Nov 29 06:35:14 compute-0 sshd-session[183671]: Disconnected from authenticating user root 162.214.92.14 port 58148 [preauth]
Nov 29 06:35:14 compute-0 ceph-mon[74654]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:35:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:14.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:15 compute-0 ceph-mon[74654]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:16.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:35:17.219 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:35:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:35:17.220 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:35:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:35:17.220 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:35:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:17.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:18.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:35:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:19.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:35:20 compute-0 podman[183684]: 2025-11-29 06:35:20.217472489 +0000 UTC m=+0.856818457 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 06:35:20 compute-0 podman[183685]: 2025-11-29 06:35:20.294264771 +0000 UTC m=+0.938025226 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 06:35:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:20.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:35:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:21.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:35:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:35:22 compute-0 ceph-mon[74654]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:22.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:35:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:23.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:35:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:35:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:35:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:35:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:35:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:35:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:35:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:24 compute-0 kernel: SELinux:  Converting 2772 SID table entries...
Nov 29 06:35:24 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 06:35:24 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 06:35:24 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 06:35:24 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 06:35:24 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 06:35:24 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 06:35:24 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 06:35:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:24.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:25 compute-0 ceph-mon[74654]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:25 compute-0 ceph-mon[74654]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:25 compute-0 ceph-mon[74654]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:25.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:26 compute-0 ceph-mon[74654]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:35:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:26.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:35:27 compute-0 sudo[183739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:35:27 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 29 06:35:27 compute-0 sudo[183739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:35:27 compute-0 sudo[183739]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:27.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:27 compute-0 sudo[183764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:35:27 compute-0 sudo[183764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:35:27 compute-0 sudo[183764]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:35:27 compute-0 ceph-mon[74654]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:27 compute-0 groupadd[183786]: group added to /etc/group: name=dnsmasq, GID=991
Nov 29 06:35:28 compute-0 groupadd[183786]: group added to /etc/gshadow: name=dnsmasq
Nov 29 06:35:28 compute-0 groupadd[183786]: new group: name=dnsmasq, GID=991
Nov 29 06:35:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:28 compute-0 useradd[183798]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 29 06:35:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:28.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:35:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:35:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:35:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:35:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:35:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:35:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:35:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:35:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:35:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:35:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:29.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:30 compute-0 sshd-session[183800]: Invalid user git from 176.109.67.96 port 41706
Nov 29 06:35:30 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Nov 29 06:35:30 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Nov 29 06:35:30 compute-0 sshd-session[183800]: Received disconnect from 176.109.67.96 port 41706:11: Bye Bye [preauth]
Nov 29 06:35:30 compute-0 sshd-session[183800]: Disconnected from invalid user git 176.109.67.96 port 41706 [preauth]
Nov 29 06:35:30 compute-0 ceph-mon[74654]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:30.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:31 compute-0 ceph-mon[74654]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:31.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:32 compute-0 groupadd[183815]: group added to /etc/group: name=clevis, GID=990
Nov 29 06:35:32 compute-0 groupadd[183815]: group added to /etc/gshadow: name=clevis
Nov 29 06:35:32 compute-0 groupadd[183815]: new group: name=clevis, GID=990
Nov 29 06:35:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:35:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:32.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:33 compute-0 useradd[183822]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 29 06:35:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:33.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:34 compute-0 usermod[183833]: add 'clevis' to group 'tss'
Nov 29 06:35:34 compute-0 usermod[183833]: add 'clevis' to shadow group 'tss'
Nov 29 06:35:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:34 compute-0 ceph-mon[74654]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:35:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:34.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:35:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:35:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:35.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:35:35 compute-0 ceph-mon[74654]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:36.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:37.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:35:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:38 compute-0 ceph-mon[74654]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:38.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:39.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:40 compute-0 sshd-session[183857]: Invalid user ubuntu from 118.193.39.127 port 45230
Nov 29 06:35:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:40.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:41 compute-0 sshd-session[183857]: Received disconnect from 118.193.39.127 port 45230:11: Bye Bye [preauth]
Nov 29 06:35:41 compute-0 sshd-session[183857]: Disconnected from invalid user ubuntu 118.193.39.127 port 45230 [preauth]
Nov 29 06:35:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:35:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:41.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:35:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:42 compute-0 polkitd[43682]: Reloading rules
Nov 29 06:35:42 compute-0 polkitd[43682]: Collecting garbage unconditionally...
Nov 29 06:35:42 compute-0 polkitd[43682]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 06:35:42 compute-0 polkitd[43682]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 06:35:42 compute-0 polkitd[43682]: Finished loading, compiling and executing 3 rules
Nov 29 06:35:42 compute-0 polkitd[43682]: Reloading rules
Nov 29 06:35:42 compute-0 polkitd[43682]: Collecting garbage unconditionally...
Nov 29 06:35:42 compute-0 polkitd[43682]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 06:35:42 compute-0 polkitd[43682]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 06:35:42 compute-0 polkitd[43682]: Finished loading, compiling and executing 3 rules
Nov 29 06:35:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:35:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:35:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:42.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:35:43 compute-0 sudo[183865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:35:43 compute-0 sudo[183865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:35:43 compute-0 sudo[183865]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:43 compute-0 sudo[183890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:35:43 compute-0 sudo[183890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:35:43 compute-0 sudo[183890]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:43 compute-0 sudo[183915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:35:43 compute-0 sudo[183915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:35:43 compute-0 sudo[183915]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:43 compute-0 sudo[183940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:35:43 compute-0 sudo[183940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:35:43 compute-0 ceph-mon[74654]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:43.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:44 compute-0 sudo[183940]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:35:44 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:35:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:35:44 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:35:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:35:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:35:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:44.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:35:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:45.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:46.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:47.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:35:47 compute-0 sudo[184051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:35:47 compute-0 sudo[184051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:35:47 compute-0 sudo[184051]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:47 compute-0 sudo[184076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:35:47 compute-0 sudo[184076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:35:47 compute-0 sudo[184076]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:48 compute-0 ceph-mon[74654]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:48 compute-0 ceph-mon[74654]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:35:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:35:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:48.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:35:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:35:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:49.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:35:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:50.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:50 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:35:51 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 12dfc784-8693-44f4-957e-6b10f2652c9e does not exist
Nov 29 06:35:51 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 6040f888-d17f-465a-a923-562bc5d2a68d does not exist
Nov 29 06:35:51 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 9bcf0475-7f10-465a-b936-9ea7241fe5cd does not exist
Nov 29 06:35:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:35:51 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:35:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:35:51 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:35:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:35:51 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:35:51 compute-0 sudo[184133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:35:51 compute-0 sudo[184133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:35:51 compute-0 podman[184117]: 2025-11-29 06:35:51.146570993 +0000 UTC m=+0.087242213 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:35:51 compute-0 sudo[184133]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:51 compute-0 podman[184120]: 2025-11-29 06:35:51.187063639 +0000 UTC m=+0.128983635 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 06:35:51 compute-0 sudo[184184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:35:51 compute-0 sudo[184184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:35:51 compute-0 sudo[184184]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:51 compute-0 sudo[184213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:35:51 compute-0 sudo[184213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:35:51 compute-0 sudo[184213]: pam_unix(sudo:session): session closed for user root
Nov 29 06:35:51 compute-0 sudo[184238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:35:51 compute-0 sudo[184238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:35:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:51.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:51 compute-0 podman[184304]: 2025-11-29 06:35:51.665135078 +0000 UTC m=+0.028524813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:35:51 compute-0 ceph-mon[74654]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:35:51 compute-0 ceph-mon[74654]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:51 compute-0 ceph-mon[74654]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:51 compute-0 ceph-mon[74654]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:35:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:35:51 compute-0 podman[184304]: 2025-11-29 06:35:51.975685891 +0000 UTC m=+0.339075526 container create 52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 06:35:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:52 compute-0 systemd[1]: Started libpod-conmon-52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b.scope.
Nov 29 06:35:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:35:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:35:52 compute-0 podman[184304]: 2025-11-29 06:35:52.834854625 +0000 UTC m=+1.198244280 container init 52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_tesla, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:35:52 compute-0 podman[184304]: 2025-11-29 06:35:52.84196949 +0000 UTC m=+1.205359125 container start 52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_tesla, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 06:35:52 compute-0 cranky_tesla[184372]: 167 167
Nov 29 06:35:52 compute-0 systemd[1]: libpod-52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b.scope: Deactivated successfully.
Nov 29 06:35:52 compute-0 podman[184304]: 2025-11-29 06:35:52.854598104 +0000 UTC m=+1.217987759 container attach 52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:35:52 compute-0 podman[184304]: 2025-11-29 06:35:52.855415868 +0000 UTC m=+1.218805533 container died 52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 06:35:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:52.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-59a088a850afa710d6692b90becdfa1b4776df8c1b010a127e90057b539c5387-merged.mount: Deactivated successfully.
Nov 29 06:35:53 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:35:53 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:35:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:53.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:35:54
Nov 29 06:35:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:35:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:35:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.mgr', 'volumes', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.log', 'backups']
Nov 29 06:35:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:35:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:35:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:35:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:35:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:35:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:35:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:35:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:54 compute-0 sshd-session[184405]: Invalid user mysql from 34.92.81.41 port 56440
Nov 29 06:35:54 compute-0 sshd-session[184405]: Received disconnect from 34.92.81.41 port 56440:11: Bye Bye [preauth]
Nov 29 06:35:54 compute-0 sshd-session[184405]: Disconnected from invalid user mysql 34.92.81.41 port 56440 [preauth]
Nov 29 06:35:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:54.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:55.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:56 compute-0 ceph-mon[74654]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:35:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:56.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:35:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:35:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:57.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:35:58 compute-0 podman[184304]: 2025-11-29 06:35:58.26672655 +0000 UTC m=+6.630116235 container remove 52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_tesla, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:35:58 compute-0 systemd[1]: libpod-conmon-52ea9d790bab989811166b350b38f0ec147e72496e5da26ec8e3b7257200461b.scope: Deactivated successfully.
Nov 29 06:35:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:35:58 compute-0 podman[184436]: 2025-11-29 06:35:58.435240783 +0000 UTC m=+0.026227556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:35:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:35:58.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:35:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:35:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:35:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:35:59.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:00 compute-0 ceph-mds[94810]: mds.beacon.cephfs.compute-0.jzycnf missed beacon ack from the monitors
Nov 29 06:36:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:00.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:01.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:36:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:36:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:03.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:36:03 compute-0 podman[184436]: 2025-11-29 06:36:03.0941922 +0000 UTC m=+4.685178933 container create d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_babbage, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 06:36:03 compute-0 sshd-session[184452]: Invalid user exx from 31.6.212.12 port 43850
Nov 29 06:36:03 compute-0 sshd-session[184452]: Received disconnect from 31.6.212.12 port 43850:11: Bye Bye [preauth]
Nov 29 06:36:03 compute-0 sshd-session[184452]: Disconnected from invalid user exx 31.6.212.12 port 43850 [preauth]
Nov 29 06:36:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:03.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:04 compute-0 systemd[1]: Started libpod-conmon-d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9.scope.
Nov 29 06:36:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc5024b121fb257fd3a29ca8f358ad2ccce7316febdde0e0b8cd2097326af49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc5024b121fb257fd3a29ca8f358ad2ccce7316febdde0e0b8cd2097326af49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc5024b121fb257fd3a29ca8f358ad2ccce7316febdde0e0b8cd2097326af49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc5024b121fb257fd3a29ca8f358ad2ccce7316febdde0e0b8cd2097326af49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc5024b121fb257fd3a29ca8f358ad2ccce7316febdde0e0b8cd2097326af49/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:36:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:04 compute-0 podman[184436]: 2025-11-29 06:36:04.760394048 +0000 UTC m=+6.351380811 container init d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_babbage, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 06:36:04 compute-0 podman[184436]: 2025-11-29 06:36:04.772384403 +0000 UTC m=+6.363371106 container start d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 06:36:04 compute-0 ceph-mon[74654]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:04 compute-0 ceph-mon[74654]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:04 compute-0 ceph-mon[74654]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:04 compute-0 ceph-mon[74654]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:04 compute-0 groupadd[184456]: group added to /etc/group: name=ceph, GID=167
Nov 29 06:36:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:36:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:05.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:36:05 compute-0 upbeat_babbage[184459]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:36:05 compute-0 upbeat_babbage[184459]: --> relative data size: 1.0
Nov 29 06:36:05 compute-0 upbeat_babbage[184459]: --> All data devices are unavailable
Nov 29 06:36:05 compute-0 systemd[1]: libpod-d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9.scope: Deactivated successfully.
Nov 29 06:36:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:05.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:06 compute-0 podman[184436]: 2025-11-29 06:36:06.707016959 +0000 UTC m=+8.298003652 container attach d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_babbage, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 06:36:06 compute-0 podman[184436]: 2025-11-29 06:36:06.710209391 +0000 UTC m=+8.301196124 container died d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_babbage, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:36:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:07.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:07 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:36:07 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 2778 writes, 12K keys, 2778 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 2778 writes, 2778 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1075 writes, 4445 keys, 1075 commit groups, 1.0 writes per commit group, ingest: 7.66 MB, 0.01 MB/s
                                           Interval WAL: 1075 writes, 1075 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.0      1.01              0.03         4    0.253       0      0       0.0       0.0
                                             L6      1/0    9.10 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1     13.7     12.0      2.32              0.09         3    0.774     12K   1290       0.0       0.0
                                            Sum      1/0    9.10 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1      9.6     12.3      3.33              0.12         7    0.476     12K   1290       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1      9.6     12.3      3.33              0.12         6    0.555     12K   1290       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     13.7     12.0      2.32              0.09         3    0.774     12K   1290       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.0      1.01              0.03         3    0.335       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.013, interval 0.013
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.04 GB write, 0.03 MB/s write, 0.03 GB read, 0.03 MB/s read, 3.3 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.03 GB read, 0.05 MB/s read, 3.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e1a58311f0#2 capacity: 304.00 MB usage: 1.34 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(69,1.20 MB,0.395885%) FilterBlock(8,44.11 KB,0.0141696%) IndexBlock(8,98.33 KB,0.0315867%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 06:36:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:07.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:07 compute-0 sudo[184487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:36:07 compute-0 sudo[184487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:07 compute-0 sudo[184487]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:08 compute-0 sudo[184512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:36:08 compute-0 sudo[184512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:08 compute-0 sudo[184512]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:08 compute-0 ceph-mds[94810]: mds.beacon.cephfs.compute-0.jzycnf missed beacon ack from the monitors
Nov 29 06:36:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:36:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:09.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:36:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:36:09 compute-0 groupadd[184456]: group added to /etc/gshadow: name=ceph
Nov 29 06:36:09 compute-0 groupadd[184456]: new group: name=ceph, GID=167
Nov 29 06:36:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:09.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:11.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:11 compute-0 sshd-session[184537]: Received disconnect from 58.210.98.130 port 62354:11: Bye Bye [preauth]
Nov 29 06:36:11 compute-0 sshd-session[184537]: Disconnected from authenticating user root 58.210.98.130 port 62354 [preauth]
Nov 29 06:36:11 compute-0 ceph-mon[74654]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:11 compute-0 ceph-mon[74654]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:11 compute-0 useradd[184544]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 29 06:36:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcc5024b121fb257fd3a29ca8f358ad2ccce7316febdde0e0b8cd2097326af49-merged.mount: Deactivated successfully.
Nov 29 06:36:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:11.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:12 compute-0 podman[184436]: 2025-11-29 06:36:12.189839353 +0000 UTC m=+13.780826086 container remove d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_babbage, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:36:12 compute-0 systemd[1]: libpod-conmon-d61b3e80e2750c56023bc1c63fa938436e49d21efd02e52deeedc7cd47c93fa9.scope: Deactivated successfully.
Nov 29 06:36:12 compute-0 sudo[184238]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:12 compute-0 sudo[184557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:36:12 compute-0 sudo[184557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:12 compute-0 sudo[184557]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:12 compute-0 sudo[184583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:36:12 compute-0 sudo[184583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:12 compute-0 sudo[184583]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:12 compute-0 sudo[184608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:36:12 compute-0 sudo[184608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:12 compute-0 sudo[184608]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:12 compute-0 sudo[184633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:36:12 compute-0 sudo[184633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:12 compute-0 podman[184697]: 2025-11-29 06:36:12.790277855 +0000 UTC m=+0.023580280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:36:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:36:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:13.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:13 compute-0 podman[184697]: 2025-11-29 06:36:13.575183821 +0000 UTC m=+0.808486276 container create 0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:36:13 compute-0 ceph-mon[74654]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:13 compute-0 ceph-mon[74654]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:13 compute-0 ceph-mon[74654]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:13.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:13 compute-0 systemd[1]: Started libpod-conmon-0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb.scope.
Nov 29 06:36:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:36:14 compute-0 podman[184697]: 2025-11-29 06:36:14.456435481 +0000 UTC m=+1.689737986 container init 0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 06:36:14 compute-0 podman[184697]: 2025-11-29 06:36:14.469821977 +0000 UTC m=+1.703124402 container start 0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:36:14 compute-0 systemd[1]: libpod-0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb.scope: Deactivated successfully.
Nov 29 06:36:14 compute-0 silly_robinson[184714]: 167 167
Nov 29 06:36:14 compute-0 conmon[184714]: conmon 0064365d7c36114eef78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb.scope/container/memory.events
Nov 29 06:36:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:36:14 compute-0 podman[184697]: 2025-11-29 06:36:14.610423476 +0000 UTC m=+1.843725921 container attach 0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:36:14 compute-0 podman[184697]: 2025-11-29 06:36:14.611718723 +0000 UTC m=+1.845021138 container died 0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 06:36:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:15.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:15 compute-0 ceph-mon[74654]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c829df6f89bbee31f04e7d99237008dc55c347f34ca4ae896fd8665e458fd49-merged.mount: Deactivated successfully.
Nov 29 06:36:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:15.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:17.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:17 compute-0 podman[184697]: 2025-11-29 06:36:17.203001631 +0000 UTC m=+4.436304076 container remove 0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 29 06:36:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:36:17.220 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:36:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:36:17.221 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:36:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:36:17.221 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:36:17 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 29 06:36:17 compute-0 sshd[1008]: Received signal 15; terminating.
Nov 29 06:36:17 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 29 06:36:17 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 29 06:36:17 compute-0 systemd[1]: sshd.service: Consumed 13.616s CPU time, read 32.0K from disk, written 368.0K to disk.
Nov 29 06:36:17 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 29 06:36:17 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 29 06:36:17 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 06:36:17 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 06:36:17 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 06:36:17 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 29 06:36:17 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 29 06:36:17 compute-0 systemd[1]: libpod-conmon-0064365d7c36114eef7806509a69e4faa7756a9bfabc82f7fd753ca5a2b166eb.scope: Deactivated successfully.
Nov 29 06:36:17 compute-0 sshd[185364]: Server listening on 0.0.0.0 port 22.
Nov 29 06:36:17 compute-0 sshd[185364]: Server listening on :: port 22.
Nov 29 06:36:17 compute-0 systemd[1]: Started OpenSSH server daemon.
Nov 29 06:36:17 compute-0 podman[185380]: 2025-11-29 06:36:17.435505268 +0000 UTC m=+0.032787176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:36:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:36:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:17.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:36:18 compute-0 podman[185380]: 2025-11-29 06:36:18.195536536 +0000 UTC m=+0.792818364 container create 007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:36:18 compute-0 ceph-mon[74654]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:18 compute-0 systemd[1]: Started libpod-conmon-007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020.scope.
Nov 29 06:36:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:36:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91f4c9a7f3fe2b2e02bb0f03bee47ec61febaea53a5267d4e30a0478debfa50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:36:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91f4c9a7f3fe2b2e02bb0f03bee47ec61febaea53a5267d4e30a0478debfa50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:36:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91f4c9a7f3fe2b2e02bb0f03bee47ec61febaea53a5267d4e30a0478debfa50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:36:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91f4c9a7f3fe2b2e02bb0f03bee47ec61febaea53a5267d4e30a0478debfa50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:36:18 compute-0 podman[185380]: 2025-11-29 06:36:18.768332563 +0000 UTC m=+1.365614391 container init 007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:36:18 compute-0 podman[185380]: 2025-11-29 06:36:18.7772744 +0000 UTC m=+1.374556218 container start 007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 06:36:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:19.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:19 compute-0 podman[185380]: 2025-11-29 06:36:19.101058535 +0000 UTC m=+1.698340403 container attach 007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 06:36:19 compute-0 objective_carver[185510]: {
Nov 29 06:36:19 compute-0 objective_carver[185510]:     "1": [
Nov 29 06:36:19 compute-0 objective_carver[185510]:         {
Nov 29 06:36:19 compute-0 objective_carver[185510]:             "devices": [
Nov 29 06:36:19 compute-0 objective_carver[185510]:                 "/dev/loop3"
Nov 29 06:36:19 compute-0 objective_carver[185510]:             ],
Nov 29 06:36:19 compute-0 objective_carver[185510]:             "lv_name": "ceph_lv0",
Nov 29 06:36:19 compute-0 objective_carver[185510]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:36:19 compute-0 objective_carver[185510]:             "lv_size": "7511998464",
Nov 29 06:36:19 compute-0 objective_carver[185510]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:36:19 compute-0 objective_carver[185510]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:36:19 compute-0 objective_carver[185510]:             "name": "ceph_lv0",
Nov 29 06:36:19 compute-0 objective_carver[185510]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:36:19 compute-0 objective_carver[185510]:             "tags": {
Nov 29 06:36:19 compute-0 objective_carver[185510]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:36:19 compute-0 objective_carver[185510]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:36:19 compute-0 objective_carver[185510]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:36:19 compute-0 objective_carver[185510]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:36:19 compute-0 objective_carver[185510]:                 "ceph.cluster_name": "ceph",
Nov 29 06:36:19 compute-0 objective_carver[185510]:                 "ceph.crush_device_class": "",
Nov 29 06:36:19 compute-0 objective_carver[185510]:                 "ceph.encrypted": "0",
Nov 29 06:36:19 compute-0 objective_carver[185510]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:36:19 compute-0 objective_carver[185510]:                 "ceph.osd_id": "1",
Nov 29 06:36:19 compute-0 objective_carver[185510]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:36:19 compute-0 objective_carver[185510]:                 "ceph.type": "block",
Nov 29 06:36:19 compute-0 objective_carver[185510]:                 "ceph.vdo": "0"
Nov 29 06:36:19 compute-0 objective_carver[185510]:             },
Nov 29 06:36:19 compute-0 objective_carver[185510]:             "type": "block",
Nov 29 06:36:19 compute-0 objective_carver[185510]:             "vg_name": "ceph_vg0"
Nov 29 06:36:19 compute-0 objective_carver[185510]:         }
Nov 29 06:36:19 compute-0 objective_carver[185510]:     ]
Nov 29 06:36:19 compute-0 objective_carver[185510]: }
Nov 29 06:36:19 compute-0 systemd[1]: libpod-007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020.scope: Deactivated successfully.
Nov 29 06:36:19 compute-0 podman[185380]: 2025-11-29 06:36:19.632332875 +0000 UTC m=+2.229614723 container died 007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 06:36:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:19.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:36:20 compute-0 ceph-mon[74654]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:20 compute-0 ceph-mon[74654]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b91f4c9a7f3fe2b2e02bb0f03bee47ec61febaea53a5267d4e30a0478debfa50-merged.mount: Deactivated successfully.
Nov 29 06:36:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:21.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:21 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 06:36:21 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 06:36:21 compute-0 sshd-session[185605]: Invalid user scan from 162.214.92.14 port 57304
Nov 29 06:36:21 compute-0 sshd-session[185605]: Received disconnect from 162.214.92.14 port 57304:11: Bye Bye [preauth]
Nov 29 06:36:21 compute-0 sshd-session[185605]: Disconnected from invalid user scan 162.214.92.14 port 57304 [preauth]
Nov 29 06:36:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:21.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:22 compute-0 systemd[1]: Reloading.
Nov 29 06:36:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:22 compute-0 systemd-rc-local-generator[185688]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:36:22 compute-0 systemd-sysv-generator[185692]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:36:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:23.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:23 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 06:36:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:23.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:36:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:36:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:36:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:36:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:36:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:36:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.003000086s ======
Nov 29 06:36:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:25.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000086s
Nov 29 06:36:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:36:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:25.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:36:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:27.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:27.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:27 compute-0 sshd-session[185703]: Received disconnect from 197.13.24.157 port 57416:11: Bye Bye [preauth]
Nov 29 06:36:27 compute-0 sshd-session[185703]: Disconnected from authenticating user root 197.13.24.157 port 57416 [preauth]
Nov 29 06:36:28 compute-0 sshd-session[185705]: Invalid user packer from 103.143.238.173 port 52922
Nov 29 06:36:28 compute-0 sshd-session[185705]: Received disconnect from 103.143.238.173 port 52922:11: Bye Bye [preauth]
Nov 29 06:36:28 compute-0 sshd-session[185705]: Disconnected from invalid user packer 103.143.238.173 port 52922 [preauth]
Nov 29 06:36:28 compute-0 sudo[185707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:36:28 compute-0 sudo[185707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:28 compute-0 sudo[185707]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:28 compute-0 sudo[185732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:36:28 compute-0 sudo[185732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:28 compute-0 sudo[185732]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:28 compute-0 ceph-mds[94810]: mds.beacon.cephfs.compute-0.jzycnf missed beacon ack from the monitors
Nov 29 06:36:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:29.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:36:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:36:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:36:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:36:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:36:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:36:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:36:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:36:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:36:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:36:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:29.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:29 compute-0 ceph-mon[74654]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:36:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:31.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:36:31 compute-0 podman[185380]: 2025-11-29 06:36:31.271501319 +0000 UTC m=+13.868783177 container remove 007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 06:36:31 compute-0 sudo[184633]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:31 compute-0 systemd[1]: libpod-conmon-007f58c3507647dce4e1047107cdf8e58e5c7cb37a8698534430f80656c7d020.scope: Deactivated successfully.
Nov 29 06:36:31 compute-0 podman[185613]: 2025-11-29 06:36:31.356750744 +0000 UTC m=+10.137432665 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:36:31 compute-0 sudo[186323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:36:31 compute-0 sudo[186323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:31 compute-0 sudo[186323]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:31 compute-0 podman[185623]: 2025-11-29 06:36:31.47986042 +0000 UTC m=+10.253879960 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 06:36:31 compute-0 sudo[186424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:36:31 compute-0 sudo[186424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:31 compute-0 sudo[186424]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:31 compute-0 sudo[186500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:36:31 compute-0 sudo[186500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:31 compute-0 sudo[186500]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).mds e10 check_health: resetting beacon timeouts due to mon delay (slow election?) of 11.6906 seconds
Nov 29 06:36:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:36:31 compute-0 sudo[186575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:36:31 compute-0 sudo[186575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:31.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:32 compute-0 podman[186963]: 2025-11-29 06:36:32.110049059 +0000 UTC m=+0.040516668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:36:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:33.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:33 compute-0 podman[186963]: 2025-11-29 06:36:33.111609574 +0000 UTC m=+1.042077103 container create 7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 06:36:33 compute-0 ceph-mon[74654]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:33 compute-0 ceph-mon[74654]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:33 compute-0 ceph-mon[74654]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:33 compute-0 ceph-mon[74654]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:33 compute-0 ceph-mon[74654]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:33 compute-0 systemd[1]: Started libpod-conmon-7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8.scope.
Nov 29 06:36:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:36:33 compute-0 podman[186963]: 2025-11-29 06:36:33.573357301 +0000 UTC m=+1.503824930 container init 7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:36:33 compute-0 podman[186963]: 2025-11-29 06:36:33.586260313 +0000 UTC m=+1.516727852 container start 7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:36:33 compute-0 blissful_goodall[187931]: 167 167
Nov 29 06:36:33 compute-0 systemd[1]: libpod-7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8.scope: Deactivated successfully.
Nov 29 06:36:33 compute-0 podman[186963]: 2025-11-29 06:36:33.636420728 +0000 UTC m=+1.566888297 container attach 7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 06:36:33 compute-0 podman[186963]: 2025-11-29 06:36:33.637474848 +0000 UTC m=+1.567942407 container died 7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 29 06:36:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:33.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee6f149acb625a135be5f68a685a1fa8bcc020eb3afe60d05317e6a34f055d3e-merged.mount: Deactivated successfully.
Nov 29 06:36:34 compute-0 podman[186963]: 2025-11-29 06:36:34.121498058 +0000 UTC m=+2.051965587 container remove 7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 06:36:34 compute-0 systemd[1]: libpod-conmon-7b683feada6226a3affba0e3b84e9cec8db54d9be00dab37e23b06d446cc8fe8.scope: Deactivated successfully.
Nov 29 06:36:34 compute-0 ceph-mon[74654]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:34 compute-0 podman[188691]: 2025-11-29 06:36:34.277316275 +0000 UTC m=+0.026184685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:36:34 compute-0 podman[188691]: 2025-11-29 06:36:34.418221223 +0000 UTC m=+0.167089653 container create b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banzai, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:36:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:34 compute-0 systemd[1]: Started libpod-conmon-b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745.scope.
Nov 29 06:36:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1747655b454def29856f4913ffeb1b3dd0734a58846f9cbdd0f9d949e976f6d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1747655b454def29856f4913ffeb1b3dd0734a58846f9cbdd0f9d949e976f6d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1747655b454def29856f4913ffeb1b3dd0734a58846f9cbdd0f9d949e976f6d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1747655b454def29856f4913ffeb1b3dd0734a58846f9cbdd0f9d949e976f6d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:36:34 compute-0 podman[188691]: 2025-11-29 06:36:34.593571263 +0000 UTC m=+0.342439703 container init b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:36:34 compute-0 podman[188691]: 2025-11-29 06:36:34.602603574 +0000 UTC m=+0.351471964 container start b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 06:36:34 compute-0 podman[188691]: 2025-11-29 06:36:34.608552085 +0000 UTC m=+0.357420495 container attach b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banzai, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:36:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:36:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:35.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:36:35 compute-0 sshd-session[188449]: Invalid user laravel from 103.147.159.91 port 53936
Nov 29 06:36:35 compute-0 wonderful_banzai[189018]: {
Nov 29 06:36:35 compute-0 wonderful_banzai[189018]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:36:35 compute-0 wonderful_banzai[189018]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:36:35 compute-0 wonderful_banzai[189018]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:36:35 compute-0 wonderful_banzai[189018]:         "osd_id": 1,
Nov 29 06:36:35 compute-0 wonderful_banzai[189018]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:36:35 compute-0 wonderful_banzai[189018]:         "type": "bluestore"
Nov 29 06:36:35 compute-0 wonderful_banzai[189018]:     }
Nov 29 06:36:35 compute-0 wonderful_banzai[189018]: }
Nov 29 06:36:35 compute-0 systemd[1]: libpod-b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745.scope: Deactivated successfully.
Nov 29 06:36:35 compute-0 podman[188691]: 2025-11-29 06:36:35.488965741 +0000 UTC m=+1.237834131 container died b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banzai, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:36:35 compute-0 sshd-session[188449]: Received disconnect from 103.147.159.91 port 53936:11: Bye Bye [preauth]
Nov 29 06:36:35 compute-0 sshd-session[188449]: Disconnected from invalid user laravel 103.147.159.91 port 53936 [preauth]
Nov 29 06:36:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:35.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-1747655b454def29856f4913ffeb1b3dd0734a58846f9cbdd0f9d949e976f6d1-merged.mount: Deactivated successfully.
Nov 29 06:36:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:36:37 compute-0 ceph-mon[74654]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:36:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:37.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:36:37 compute-0 podman[188691]: 2025-11-29 06:36:37.376779229 +0000 UTC m=+3.125647659 container remove b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 29 06:36:37 compute-0 sudo[186575]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:36:37 compute-0 systemd[1]: libpod-conmon-b5933e5477720709bb8410ebc7ca9ffe60b64e3986b680f40782d3223f40a745.scope: Deactivated successfully.
Nov 29 06:36:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:36:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:37.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:36:37 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:36:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:36:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:39.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:39.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:40 compute-0 sudo[165279]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:40 compute-0 ceph-mon[74654]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:40 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:36:40 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:36:40 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev a3fa91c9-560a-4b38-9c60-2fcbdb83f66e does not exist
Nov 29 06:36:40 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev e8218213-fa7f-408b-bee5-e1aa80e95216 does not exist
Nov 29 06:36:40 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev f53883b5-e089-42a3-93aa-71c1bfd0eb44 does not exist
Nov 29 06:36:40 compute-0 sudo[194176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:36:40 compute-0 sudo[194176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:40 compute-0 sudo[194176]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:40 compute-0 sudo[194308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iodfvddmuljfjbszlukrrsfwrajkonpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398200.2703493-973-93123693289060/AnsiballZ_systemd.py'
Nov 29 06:36:40 compute-0 sudo[194308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:36:40 compute-0 sudo[194294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:36:40 compute-0 sudo[194294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:40 compute-0 sudo[194294]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:41.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:41 compute-0 python3.9[194353]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 06:36:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:36:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:41.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:41 compute-0 ceph-mon[74654]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:41 compute-0 ceph-mon[74654]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:41 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:36:42 compute-0 systemd[1]: Reloading.
Nov 29 06:36:42 compute-0 systemd-rc-local-generator[194627]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:36:42 compute-0 systemd-sysv-generator[194631]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:36:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:42 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 06:36:42 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 06:36:42 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.072s CPU time.
Nov 29 06:36:42 compute-0 systemd[1]: run-r82c68b860e11417faf59952e344d78d4.service: Deactivated successfully.
Nov 29 06:36:42 compute-0 sudo[194308]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:43.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:43 compute-0 sudo[194786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efnjugmmsytnxnlkrdsoggsyikzfvfxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398202.8064494-973-68679914291867/AnsiballZ_systemd.py'
Nov 29 06:36:43 compute-0 sudo[194786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:36:43 compute-0 python3.9[194790]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 06:36:43 compute-0 systemd[1]: Reloading.
Nov 29 06:36:43 compute-0 systemd-rc-local-generator[194820]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:36:43 compute-0 systemd-sysv-generator[194823]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:36:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:43.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:43 compute-0 sudo[194786]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:43 compute-0 ceph-mon[74654]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:44 compute-0 sshd-session[194787]: Invalid user mcserver from 49.247.35.31 port 11055
Nov 29 06:36:44 compute-0 sudo[194978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kisxdogksrbjluohgwqwfgyhvwodwegq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398203.9233768-973-167778233583994/AnsiballZ_systemd.py'
Nov 29 06:36:44 compute-0 sudo[194978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:36:44 compute-0 sshd-session[194787]: Received disconnect from 49.247.35.31 port 11055:11: Bye Bye [preauth]
Nov 29 06:36:44 compute-0 sshd-session[194787]: Disconnected from invalid user mcserver 49.247.35.31 port 11055 [preauth]
Nov 29 06:36:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:44 compute-0 python3.9[194980]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 06:36:44 compute-0 systemd[1]: Reloading.
Nov 29 06:36:44 compute-0 systemd-sysv-generator[195008]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:36:44 compute-0 systemd-rc-local-generator[195005]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:36:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:45.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:45 compute-0 sudo[194978]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:45.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:45 compute-0 ceph-mon[74654]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:45 compute-0 sudo[195169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwpehcackqrygjkkqiopwmyrhkukmjtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398205.6111324-973-117878908779931/AnsiballZ_systemd.py'
Nov 29 06:36:45 compute-0 sudo[195169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:36:46 compute-0 python3.9[195171]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 06:36:46 compute-0 systemd[1]: Reloading.
Nov 29 06:36:46 compute-0 systemd-rc-local-generator[195197]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:36:46 compute-0 systemd-sysv-generator[195203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:36:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:36:46 compute-0 sudo[195169]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:36:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:47.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:36:47 compute-0 ceph-mon[74654]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:47 compute-0 sudo[195360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljqkvnbvszkdtuxunzgpsvcwlsjdruti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398206.8626099-1063-73882943102235/AnsiballZ_systemd.py'
Nov 29 06:36:47 compute-0 sudo[195360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:36:47 compute-0 python3.9[195362]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:36:47 compute-0 systemd[1]: Reloading.
Nov 29 06:36:47 compute-0 systemd-sysv-generator[195393]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:36:47 compute-0 systemd-rc-local-generator[195390]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:36:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:47.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:48 compute-0 sudo[195360]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:48 compute-0 sudo[195401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:36:48 compute-0 sudo[195401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:48 compute-0 sudo[195401]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:48 compute-0 sudo[195449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:36:48 compute-0 sudo[195449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:36:48 compute-0 sudo[195449]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:48 compute-0 sudo[195600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okvraefnuiawjaudrqpxqzftlzxxhcfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398208.4156263-1063-234287113637537/AnsiballZ_systemd.py'
Nov 29 06:36:48 compute-0 sudo[195600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:36:49 compute-0 python3.9[195602]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:36:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:49.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:49 compute-0 systemd[1]: Reloading.
Nov 29 06:36:49 compute-0 systemd-rc-local-generator[195634]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:36:49 compute-0 systemd-sysv-generator[195638]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:36:49 compute-0 sudo[195600]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:36:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:49.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:36:50 compute-0 ceph-mon[74654]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:50 compute-0 sudo[195793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knbmkfqaiacqmdivreuvwezarbjtslrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398209.8908198-1063-243739327746639/AnsiballZ_systemd.py'
Nov 29 06:36:50 compute-0 sudo[195793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:36:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:50 compute-0 python3.9[195795]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:36:50 compute-0 systemd[1]: Reloading.
Nov 29 06:36:50 compute-0 systemd-rc-local-generator[195822]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:36:50 compute-0 systemd-sysv-generator[195826]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:36:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:51.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:51 compute-0 sudo[195793]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:51 compute-0 sshd-session[195722]: Invalid user erpnext from 118.193.39.127 port 49540
Nov 29 06:36:51 compute-0 ceph-mon[74654]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:51 compute-0 sshd-session[195722]: Received disconnect from 118.193.39.127 port 49540:11: Bye Bye [preauth]
Nov 29 06:36:51 compute-0 sshd-session[195722]: Disconnected from invalid user erpnext 118.193.39.127 port 49540 [preauth]
Nov 29 06:36:51 compute-0 sudo[195989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsyeqsbazspfouttsevvlrwukncchnix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398211.2683074-1063-268579189323499/AnsiballZ_systemd.py'
Nov 29 06:36:51 compute-0 sudo[195989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:36:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:36:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:51.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:51 compute-0 python3.9[195991]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:36:51 compute-0 sshd-session[195836]: Received disconnect from 176.109.67.96 port 47248:11: Bye Bye [preauth]
Nov 29 06:36:51 compute-0 sshd-session[195836]: Disconnected from authenticating user root 176.109.67.96 port 47248 [preauth]
Nov 29 06:36:51 compute-0 sudo[195989]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:52 compute-0 sudo[196144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fypshuxqsscsugfjmjnxzlexzclwflhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398212.1210048-1063-148198841403657/AnsiballZ_systemd.py'
Nov 29 06:36:52 compute-0 sudo[196144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:36:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:52 compute-0 python3.9[196146]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:36:52 compute-0 systemd[1]: Reloading.
Nov 29 06:36:52 compute-0 systemd-sysv-generator[196181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:36:52 compute-0 systemd-rc-local-generator[196178]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:36:53 compute-0 sshd-session[195967]: Invalid user demo from 103.63.25.115 port 56824
Nov 29 06:36:53 compute-0 sudo[196144]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:53.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:53 compute-0 sshd-session[195967]: Received disconnect from 103.63.25.115 port 56824:11: Bye Bye [preauth]
Nov 29 06:36:53 compute-0 sshd-session[195967]: Disconnected from invalid user demo 103.63.25.115 port 56824 [preauth]
Nov 29 06:36:53 compute-0 sudo[196336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llhivpqohvknaofdmdnlvzvolysyyfkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398213.2542627-1171-63159451750414/AnsiballZ_systemd.py'
Nov 29 06:36:53 compute-0 sudo[196336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:36:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:53.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:53 compute-0 python3.9[196338]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 06:36:53 compute-0 systemd[1]: Reloading.
Nov 29 06:36:53 compute-0 systemd-rc-local-generator[196371]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:36:54 compute-0 systemd-sysv-generator[196374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:36:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:36:54
Nov 29 06:36:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:36:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:36:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['.mgr', 'vms', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 29 06:36:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:36:54 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 29 06:36:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:36:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:36:54 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 29 06:36:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:36:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:36:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:36:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:36:54 compute-0 sudo[196336]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:55.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:55.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:56 compute-0 ceph-mon[74654]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:56 compute-0 sudo[196530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqntvcjdecxqaoqpvppfskytfciyvmsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398216.3710887-1195-260830752098742/AnsiballZ_systemd.py'
Nov 29 06:36:56 compute-0 sudo[196530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:36:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:36:57 compute-0 python3.9[196532]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:36:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:36:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:57.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:36:57 compute-0 sudo[196530]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:57 compute-0 ceph-mon[74654]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:57 compute-0 ceph-mon[74654]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.469551) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398217469638, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2187, "num_deletes": 251, "total_data_size": 4253493, "memory_usage": 4343856, "flush_reason": "Manual Compaction"}
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 29 06:36:57 compute-0 sudo[196686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywtvzkfcnppcowmhldcbtfgehjtrdubj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398217.2667484-1195-224441021867500/AnsiballZ_systemd.py'
Nov 29 06:36:57 compute-0 sudo[196686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:36:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:57.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398217775054, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 4155632, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10952, "largest_seqno": 13138, "table_properties": {"data_size": 4145714, "index_size": 6348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19625, "raw_average_key_size": 20, "raw_value_size": 4125981, "raw_average_value_size": 4210, "num_data_blocks": 284, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764397891, "oldest_key_time": 1764397891, "file_creation_time": 1764398217, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 305542 microseconds, and 24869 cpu microseconds.
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.775095) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 4155632 bytes OK
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.775113) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.803773) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.803827) EVENT_LOG_v1 {"time_micros": 1764398217803817, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.803851) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4244737, prev total WAL file size 4244737, number of live WAL files 2.
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.805522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(4058KB)], [26(9323KB)]
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398217805644, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 13702761, "oldest_snapshot_seqno": -1}
Nov 29 06:36:57 compute-0 python3.9[196688]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4461 keys, 10609626 bytes, temperature: kUnknown
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398217921213, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 10609626, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10574832, "index_size": 22524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 109117, "raw_average_key_size": 24, "raw_value_size": 10489312, "raw_average_value_size": 2351, "num_data_blocks": 972, "num_entries": 4461, "num_filter_entries": 4461, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764398217, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.921564) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 10609626 bytes
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.922974) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.4 rd, 91.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 9.1 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(5.9) write-amplify(2.6) OK, records in: 4979, records dropped: 518 output_compression: NoCompression
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.923006) EVENT_LOG_v1 {"time_micros": 1764398217922990, "job": 10, "event": "compaction_finished", "compaction_time_micros": 115702, "compaction_time_cpu_micros": 52761, "output_level": 6, "num_output_files": 1, "total_output_size": 10609626, "num_input_records": 4979, "num_output_records": 4461, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398217924301, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398217927572, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.805334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.927651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.927656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.927658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.927659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:36:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:36:57.927661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:36:57 compute-0 sudo[196686]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:58 compute-0 sudo[196841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pniqrqjsngtqhalvlvrcotnxomgkhlml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398218.1345215-1195-237879163602330/AnsiballZ_systemd.py'
Nov 29 06:36:58 compute-0 sudo[196841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:36:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:36:58 compute-0 python3.9[196843]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:36:58 compute-0 sudo[196841]: pam_unix(sudo:session): session closed for user root
Nov 29 06:36:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:36:59.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:36:59 compute-0 sudo[196997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrbeccknxsdhfbctsdmaqangnzzxginm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398218.999266-1195-227495102469910/AnsiballZ_systemd.py'
Nov 29 06:36:59 compute-0 sudo[196997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:36:59 compute-0 python3.9[196999]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:36:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:36:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:36:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:36:59.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:00 compute-0 sudo[196997]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:01.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:01 compute-0 sudo[197153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eofxbcdkamnficiulksmbqwoecaicsbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398220.8558497-1195-276717055633930/AnsiballZ_systemd.py'
Nov 29 06:37:01 compute-0 sudo[197153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:01 compute-0 python3.9[197155]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:37:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:01.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:01 compute-0 sudo[197153]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:01 compute-0 podman[197157]: 2025-11-29 06:37:01.803980196 +0000 UTC m=+0.071397157 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 06:37:01 compute-0 podman[197158]: 2025-11-29 06:37:01.837864962 +0000 UTC m=+0.098348241 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 06:37:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:37:01 compute-0 ceph-mon[74654]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:02 compute-0 sudo[197352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfhwzsfdpbkqddeebofnfdfwwsubeynn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398221.9212403-1195-280482402982470/AnsiballZ_systemd.py'
Nov 29 06:37:02 compute-0 sudo[197352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:02 compute-0 python3.9[197354]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:37:02 compute-0 sudo[197352]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:03.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:03 compute-0 sudo[197508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uccfrbnvpvmjlyqmkjxtdgbycwqdyxpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398222.7864625-1195-229527960749277/AnsiballZ_systemd.py'
Nov 29 06:37:03 compute-0 sudo[197508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:03 compute-0 python3.9[197510]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:37:03 compute-0 ceph-mon[74654]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:03 compute-0 sudo[197508]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:03.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:04 compute-0 sudo[197663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azfhgfqopfdwtozrubmxdzhzskaumqwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398223.6739278-1195-203451295628306/AnsiballZ_systemd.py'
Nov 29 06:37:04 compute-0 sudo[197663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:04 compute-0 python3.9[197665]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:37:04 compute-0 sudo[197663]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:04 compute-0 ceph-mon[74654]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:04 compute-0 sudo[197818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgixklvgfqixstkucohidoezmutctmpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398224.5641193-1195-266990943982776/AnsiballZ_systemd.py'
Nov 29 06:37:04 compute-0 sudo[197818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:05.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:05 compute-0 python3.9[197820]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:37:05 compute-0 sudo[197818]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:05.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:05 compute-0 sudo[197974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rztmzniznfpfsqzzbeugewihmuedsooo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398225.4134834-1195-107187314857649/AnsiballZ_systemd.py'
Nov 29 06:37:05 compute-0 sudo[197974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:06 compute-0 python3.9[197976]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:37:06 compute-0 ceph-mon[74654]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:06 compute-0 sudo[197974]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:06 compute-0 sudo[198129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeomyjkmdewyesqiwigqqmecmvwowuev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398226.3992465-1195-217865177840235/AnsiballZ_systemd.py'
Nov 29 06:37:06 compute-0 sudo[198129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:37:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:07.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:07 compute-0 python3.9[198131]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:37:07 compute-0 sudo[198129]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:07 compute-0 ceph-mon[74654]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:07 compute-0 sudo[198285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdxpgvekoakzogxxjfcavnreqdgpktjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398227.4176745-1195-210976982520950/AnsiballZ_systemd.py'
Nov 29 06:37:07 compute-0 sudo[198285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:07.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:08 compute-0 python3.9[198287]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:37:08 compute-0 sudo[198285]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:08 compute-0 sudo[198371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:08 compute-0 sudo[198371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:08 compute-0 sudo[198371]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:08 compute-0 sudo[198415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:08 compute-0 sudo[198415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:08 compute-0 sudo[198415]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:08 compute-0 sudo[198490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epzzbnjzklcenjxwkrhpwjcaehfugyrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398228.3036308-1195-187273081984074/AnsiballZ_systemd.py'
Nov 29 06:37:08 compute-0 sudo[198490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:09.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:09 compute-0 python3.9[198492]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:37:09 compute-0 sudo[198490]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:09.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:09 compute-0 sudo[198646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acqwcyenhjfubpqfdqegnqqwdwdgkbuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398229.4397376-1195-44785194536923/AnsiballZ_systemd.py'
Nov 29 06:37:09 compute-0 sudo[198646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:10 compute-0 ceph-mon[74654]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:10 compute-0 python3.9[198648]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 06:37:10 compute-0 sudo[198646]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:10 compute-0 sudo[198802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycssnjkmilrkmupzujzvwbnuictxrsyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398230.6585016-1501-14749826119650/AnsiballZ_file.py'
Nov 29 06:37:10 compute-0 sudo[198802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:11.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:11 compute-0 python3.9[198804]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:37:11 compute-0 sudo[198802]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:11 compute-0 sudo[198954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bplptjhsyvkvzpuauxsjwrgnexiuokod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398231.3747568-1501-91411711051470/AnsiballZ_file.py'
Nov 29 06:37:11 compute-0 sudo[198954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:11.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:11 compute-0 python3.9[198956]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:37:11 compute-0 sudo[198954]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:37:12 compute-0 sudo[199106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbwrmtbveskmddgstvcbdelthbnszdqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398232.0197132-1501-128449613691151/AnsiballZ_file.py'
Nov 29 06:37:12 compute-0 sudo[199106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:12 compute-0 python3.9[199108]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:37:12 compute-0 sudo[199106]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:37:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:37:13 compute-0 sudo[199259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkreimgsrcjrptjuwocojfimuhixjdcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398232.729304-1501-125992687859745/AnsiballZ_file.py'
Nov 29 06:37:13 compute-0 sudo[199259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:13.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:13 compute-0 python3.9[199261]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:37:13 compute-0 sudo[199259]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:13 compute-0 ceph-mon[74654]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:13 compute-0 sudo[199411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgeasmtoblhfrsmmrtguozyrpvrabjsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398233.4274786-1501-67227804360573/AnsiballZ_file.py'
Nov 29 06:37:13 compute-0 sudo[199411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:13.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:13 compute-0 python3.9[199413]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:37:13 compute-0 sudo[199411]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:14 compute-0 sudo[199563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nihaqxyprwblwqyvcjykdlwmckswjaej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398234.128324-1501-164248685836193/AnsiballZ_file.py'
Nov 29 06:37:14 compute-0 sudo[199563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:14 compute-0 ceph-mon[74654]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:14 compute-0 python3.9[199565]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:37:14 compute-0 sudo[199563]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:15.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:15 compute-0 sudo[199716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sexsomlnegbmddkxtyvzexphdbfrflkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398234.9088657-1630-170388195490610/AnsiballZ_stat.py'
Nov 29 06:37:15 compute-0 sudo[199716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:15 compute-0 python3.9[199718]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:15 compute-0 ceph-mon[74654]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:15 compute-0 sudo[199716]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:15.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:16 compute-0 sudo[199841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbdjwanlzdzhrmyepheiklbgntqought ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398234.9088657-1630-170388195490610/AnsiballZ_copy.py'
Nov 29 06:37:16 compute-0 sudo[199841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:16 compute-0 python3.9[199843]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398234.9088657-1630-170388195490610/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:16 compute-0 sudo[199841]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:16 compute-0 sudo[199993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebrgwbimdvrrtbdjchksyqwaoirkpiqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398236.380556-1630-47103561422678/AnsiballZ_stat.py'
Nov 29 06:37:16 compute-0 sudo[199993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:37:16 compute-0 python3.9[199995]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:16 compute-0 sudo[199993]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:17.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:37:17.222 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:37:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:37:17.223 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:37:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:37:17.223 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:37:17 compute-0 sudo[200119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjzkthyaegoxfhtivgisjjtotsdcnwwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398236.380556-1630-47103561422678/AnsiballZ_copy.py'
Nov 29 06:37:17 compute-0 sudo[200119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:17 compute-0 python3.9[200121]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398236.380556-1630-47103561422678/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:17 compute-0 sudo[200119]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:17.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:17 compute-0 sudo[200271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bircfgmhqqfnwmynqiklbifqhkzcxfhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398237.6762164-1630-50679995799330/AnsiballZ_stat.py'
Nov 29 06:37:17 compute-0 sudo[200271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:18 compute-0 python3.9[200273]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:18 compute-0 sudo[200271]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:18 compute-0 ceph-mon[74654]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:18 compute-0 sudo[200396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbaiometvqjekmzkmsxaqwbzipnjpgoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398237.6762164-1630-50679995799330/AnsiballZ_copy.py'
Nov 29 06:37:18 compute-0 sudo[200396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:18 compute-0 python3.9[200398]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398237.6762164-1630-50679995799330/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:18 compute-0 sudo[200396]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:19.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:19 compute-0 sudo[200551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leucgfnwewtovsmirvdxdfejudayaosx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398238.9253066-1630-83226004075043/AnsiballZ_stat.py'
Nov 29 06:37:19 compute-0 sudo[200551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:19 compute-0 python3.9[200553]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:19 compute-0 sudo[200551]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:19.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:19 compute-0 ceph-mon[74654]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:19 compute-0 sudo[200676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwvzutsmehxbinjfaggzhwgkkvtwquho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398238.9253066-1630-83226004075043/AnsiballZ_copy.py'
Nov 29 06:37:19 compute-0 sudo[200676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:20 compute-0 python3.9[200678]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398238.9253066-1630-83226004075043/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:20 compute-0 sudo[200676]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:20 compute-0 sshd-session[200476]: Invalid user packer from 34.92.81.41 port 58842
Nov 29 06:37:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:20 compute-0 sshd-session[200476]: Received disconnect from 34.92.81.41 port 58842:11: Bye Bye [preauth]
Nov 29 06:37:20 compute-0 sshd-session[200476]: Disconnected from invalid user packer 34.92.81.41 port 58842 [preauth]
Nov 29 06:37:20 compute-0 sudo[200828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhkchtvzyhlfodxmeocpcbkglicscrku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398240.220464-1630-210046959776749/AnsiballZ_stat.py'
Nov 29 06:37:20 compute-0 sudo[200828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:20 compute-0 python3.9[200830]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:20 compute-0 sudo[200828]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:21.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:21 compute-0 sudo[200954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfqqhhxelzzqrbkoakapgmgjyfuabcfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398240.220464-1630-210046959776749/AnsiballZ_copy.py'
Nov 29 06:37:21 compute-0 sudo[200954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:21 compute-0 python3.9[200956]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398240.220464-1630-210046959776749/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:21 compute-0 sudo[200954]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:21 compute-0 ceph-mon[74654]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:21 compute-0 sudo[201106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbbfstyuoflsxpfayhqwtcshtqvpukwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398241.5333674-1630-16078631982296/AnsiballZ_stat.py'
Nov 29 06:37:21 compute-0 sudo[201106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:21.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:37:22 compute-0 python3.9[201108]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:22 compute-0 sudo[201106]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:22 compute-0 sudo[201231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzwvydsyxlftfukyvnnlcfjlgvglmbsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398241.5333674-1630-16078631982296/AnsiballZ_copy.py'
Nov 29 06:37:22 compute-0 sudo[201231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:22 compute-0 python3.9[201233]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398241.5333674-1630-16078631982296/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:22 compute-0 sudo[201231]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:22 compute-0 sudo[201384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nolreyifbyrkxpgcaawakyvqmzvbxbcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398242.7243857-1630-84063304560593/AnsiballZ_stat.py'
Nov 29 06:37:22 compute-0 sudo[201384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:23.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:23 compute-0 python3.9[201386]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:23 compute-0 sudo[201384]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:23 compute-0 sudo[201509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njtnbgwvfthwcisdstjpbbqnukfoljex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398242.7243857-1630-84063304560593/AnsiballZ_copy.py'
Nov 29 06:37:23 compute-0 sudo[201509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:23 compute-0 sshd-session[201387]: Invalid user admin from 31.6.212.12 port 46896
Nov 29 06:37:23 compute-0 python3.9[201511]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398242.7243857-1630-84063304560593/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:23 compute-0 sudo[201509]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:23 compute-0 sshd-session[201387]: Received disconnect from 31.6.212.12 port 46896:11: Bye Bye [preauth]
Nov 29 06:37:23 compute-0 sshd-session[201387]: Disconnected from invalid user admin 31.6.212.12 port 46896 [preauth]
Nov 29 06:37:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:23.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:24 compute-0 sudo[201661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctzrzdjbqmuzixdfqpsvhmockzupaken ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398243.9052818-1630-62585285579945/AnsiballZ_stat.py'
Nov 29 06:37:24 compute-0 sudo[201661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:37:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:37:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:37:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:37:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:37:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:37:24 compute-0 python3.9[201663]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:24 compute-0 sudo[201661]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:24 compute-0 sudo[201788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypfsqbehcuwgsoooqzrqtzwrsugkuxyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398243.9052818-1630-62585285579945/AnsiballZ_copy.py'
Nov 29 06:37:24 compute-0 sudo[201788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:24 compute-0 sshd-session[201689]: Invalid user thomas from 162.214.92.14 port 56466
Nov 29 06:37:25 compute-0 ceph-mon[74654]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:25 compute-0 sshd-session[201689]: Received disconnect from 162.214.92.14 port 56466:11: Bye Bye [preauth]
Nov 29 06:37:25 compute-0 sshd-session[201689]: Disconnected from invalid user thomas 162.214.92.14 port 56466 [preauth]
Nov 29 06:37:25 compute-0 python3.9[201790]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764398243.9052818-1630-62585285579945/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:25.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:25 compute-0 sudo[201788]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:25.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:37:26 compute-0 sudo[201942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqlwqrmhhbofrxeztvfzwuhfbtenpgkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398246.6159208-1969-75618407905814/AnsiballZ_command.py'
Nov 29 06:37:26 compute-0 sudo[201942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:27 compute-0 python3.9[201944]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 29 06:37:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:27.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:27 compute-0 sudo[201942]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:27.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:28 compute-0 sudo[201970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:28 compute-0 sudo[201970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:28 compute-0 sudo[201970]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:28 compute-0 sudo[201995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:28 compute-0 sudo[201995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:28 compute-0 sudo[201995]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:29.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:37:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:37:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:37:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:37:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:37:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:37:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:37:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:37:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:37:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:37:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:29.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:30 compute-0 sudo[202146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dozziqcammsllctjxbogkbqrbnxiqglv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398250.501638-1996-42563014808803/AnsiballZ_file.py'
Nov 29 06:37:30 compute-0 sudo[202146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:31 compute-0 python3.9[202148]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:31 compute-0 sudo[202146]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:31.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:31 compute-0 ceph-mon[74654]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:31 compute-0 sudo[202299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptcscrvyicdivkcrsjvjrvpvhnzysuev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398251.2662833-1996-6343181496379/AnsiballZ_file.py'
Nov 29 06:37:31 compute-0 sudo[202299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:31 compute-0 python3.9[202301]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:31 compute-0 sudo[202299]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:31.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:32 compute-0 podman[202349]: 2025-11-29 06:37:32.125062927 +0000 UTC m=+0.083476259 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 06:37:32 compute-0 podman[202350]: 2025-11-29 06:37:32.162860406 +0000 UTC m=+0.118753564 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 06:37:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:37:32 compute-0 sudo[202496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzftwqyvsnorymcrmezhalgyqcvqukfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398252.019074-1996-64383316327606/AnsiballZ_file.py'
Nov 29 06:37:32 compute-0 sudo[202496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:32 compute-0 python3.9[202498]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:32 compute-0 sudo[202496]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:33 compute-0 sudo[202649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viiongngjqqkaorecekayuhkplvnodmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398252.7915053-1996-37998597688090/AnsiballZ_file.py'
Nov 29 06:37:33 compute-0 sudo[202649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:33.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:33 compute-0 python3.9[202651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:33 compute-0 sudo[202649]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:33 compute-0 sudo[202801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkhssljvgfpvhqiiuzovjqumttnkqljo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398253.5202923-1996-222421827020253/AnsiballZ_file.py'
Nov 29 06:37:33 compute-0 sudo[202801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:33.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:34 compute-0 python3.9[202803]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:34 compute-0 sudo[202801]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:34 compute-0 sudo[202953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyhwtjrelixlagtpvrvsrrybltutwwap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398254.2959719-1996-3027432584677/AnsiballZ_file.py'
Nov 29 06:37:34 compute-0 sudo[202953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:35.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:35 compute-0 python3.9[202955]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:35 compute-0 sudo[202953]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:35 compute-0 ceph-mon[74654]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:35 compute-0 ceph-mon[74654]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:35 compute-0 ceph-mon[74654]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:35 compute-0 sudo[203106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckamiynzohvwkizdfeeguyvqrqjfvvre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398255.4010193-1996-86092648933753/AnsiballZ_file.py'
Nov 29 06:37:35 compute-0 sudo[203106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:35.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:35 compute-0 python3.9[203108]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:35 compute-0 sudo[203106]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:36 compute-0 sudo[203260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omdantegdijvzhlzdkiqyjgahlxvmvue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398256.1569552-1996-119580050158643/AnsiballZ_file.py'
Nov 29 06:37:36 compute-0 sudo[203260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:36 compute-0 python3.9[203262]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:36 compute-0 sudo[203260]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:37.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:37 compute-0 sshd-session[203221]: Received disconnect from 103.143.238.173 port 41962:11: Bye Bye [preauth]
Nov 29 06:37:37 compute-0 sshd-session[203221]: Disconnected from authenticating user root 103.143.238.173 port 41962 [preauth]
Nov 29 06:37:37 compute-0 sudo[203413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scangaijcgjilprylarpidlbsszsxulc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398257.147904-1996-66664195138234/AnsiballZ_file.py'
Nov 29 06:37:37 compute-0 sudo[203413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:37:37 compute-0 python3.9[203415]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:37 compute-0 sudo[203413]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:37.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:38 compute-0 sudo[203565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyzfzbfqqrggbzarqdmhmaajpdfvdbrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398257.8898308-1996-240226710716395/AnsiballZ_file.py'
Nov 29 06:37:38 compute-0 sudo[203565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:38 compute-0 python3.9[203567]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:38 compute-0 sudo[203565]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:38 compute-0 ceph-mon[74654]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:38 compute-0 ceph-mon[74654]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:39 compute-0 sudo[203718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acyiypkpygcqfpnoroaealmzzdwidffk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398258.707759-1996-181794918425826/AnsiballZ_file.py'
Nov 29 06:37:39 compute-0 sudo[203718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:39.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:39 compute-0 python3.9[203720]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:39 compute-0 sudo[203718]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:39 compute-0 sudo[203870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhkezgfwmqncldrkcuslaffjolxhdhkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398259.4691463-1996-165967150390178/AnsiballZ_file.py'
Nov 29 06:37:39 compute-0 sudo[203870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:39.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:40 compute-0 python3.9[203872]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:40 compute-0 sudo[203870]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:40 compute-0 sudo[204022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqxhqjhwgqjqibulyddnpsdbccwcpdnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398260.205799-1996-199899918483593/AnsiballZ_file.py'
Nov 29 06:37:40 compute-0 sudo[204022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:40 compute-0 python3.9[204024]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:40 compute-0 ceph-mon[74654]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:40 compute-0 ceph-mon[74654]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:40 compute-0 sudo[204022]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:41 compute-0 sudo[204179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjmdjxuqlvteymswjykgtqilkzbchhbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398260.9175842-1996-271388346168964/AnsiballZ_file.py'
Nov 29 06:37:41 compute-0 sudo[204179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:41.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:41 compute-0 sudo[204182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:41 compute-0 sudo[204182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:41 compute-0 sudo[204182]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:41 compute-0 sudo[204207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:37:41 compute-0 sudo[204207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:41 compute-0 sudo[204207]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:41 compute-0 python3.9[204181]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:41 compute-0 sudo[204232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:41 compute-0 sudo[204232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:41 compute-0 sudo[204232]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:41 compute-0 sudo[204179]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:41 compute-0 sudo[204257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 06:37:41 compute-0 sudo[204257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:41 compute-0 sshd-session[204025]: Invalid user ftptest from 197.13.24.157 port 52956
Nov 29 06:37:41 compute-0 sshd-session[204025]: Received disconnect from 197.13.24.157 port 52956:11: Bye Bye [preauth]
Nov 29 06:37:41 compute-0 sshd-session[204025]: Disconnected from invalid user ftptest 197.13.24.157 port 52956 [preauth]
Nov 29 06:37:41 compute-0 sudo[204500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gogvxtwqocwxuzzsviynuonfqzqotcmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398261.5576751-2293-47691541546331/AnsiballZ_stat.py'
Nov 29 06:37:41 compute-0 sudo[204500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:41.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:42 compute-0 python3.9[204509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:42 compute-0 sudo[204500]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:42 compute-0 sshd-session[204027]: Received disconnect from 27.112.78.245 port 38236:11: Bye Bye [preauth]
Nov 29 06:37:42 compute-0 sshd-session[204027]: Disconnected from authenticating user root 27.112.78.245 port 38236 [preauth]
Nov 29 06:37:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:37:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:42 compute-0 sudo[204637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcwbbjvzbmxsazfqgzyqrebyzkwgxklz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398261.5576751-2293-47691541546331/AnsiballZ_copy.py'
Nov 29 06:37:42 compute-0 sudo[204637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:37:42 compute-0 podman[204501]: 2025-11-29 06:37:42.697042083 +0000 UTC m=+0.830715326 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 06:37:42 compute-0 podman[204501]: 2025-11-29 06:37:42.797267007 +0000 UTC m=+0.930940210 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 06:37:42 compute-0 python3.9[204639]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398261.5576751-2293-47691541546331/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:42 compute-0 sudo[204637]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:42 compute-0 ceph-mon[74654]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:43.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:43 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:37:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:37:43 compute-0 sudo[204809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mffeimxlhetljxswygsayuqalptdveai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398262.9884467-2293-215288100640885/AnsiballZ_stat.py'
Nov 29 06:37:43 compute-0 sudo[204809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:43 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:37:43 compute-0 python3.9[204813]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:43 compute-0 sudo[204809]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:43 compute-0 podman[205026]: 2025-11-29 06:37:43.830646036 +0000 UTC m=+0.050885301 container exec f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:37:43 compute-0 sudo[205070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwhfnrkmyyyrqronakcafkfwrrahahpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398262.9884467-2293-215288100640885/AnsiballZ_copy.py'
Nov 29 06:37:43 compute-0 sudo[205070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:43 compute-0 podman[205026]: 2025-11-29 06:37:43.845193959 +0000 UTC m=+0.065433194 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:37:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:43.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:44 compute-0 python3.9[205075]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398262.9884467-2293-215288100640885/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:44 compute-0 sudo[205070]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:44 compute-0 podman[205118]: 2025-11-29 06:37:44.083839688 +0000 UTC m=+0.065549197 container exec c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, io.buildah.version=1.28.2, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 06:37:44 compute-0 podman[205118]: 2025-11-29 06:37:44.129333441 +0000 UTC m=+0.111042940 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, distribution-scope=public, version=2.2.4, vcs-type=git, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20)
Nov 29 06:37:44 compute-0 sudo[204257]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:37:44 compute-0 sudo[205301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryjfmwrmgakvczmcfqtxtqudoflmnxcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398264.1918023-2293-14866083655403/AnsiballZ_stat.py'
Nov 29 06:37:44 compute-0 sudo[205301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:44 compute-0 python3.9[205303]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:44 compute-0 sudo[205301]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:45 compute-0 sudo[205425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvygubcwsyxducsfhmpgnimtipldzoff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398264.1918023-2293-14866083655403/AnsiballZ_copy.py'
Nov 29 06:37:45 compute-0 sudo[205425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:45.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:45 compute-0 python3.9[205427]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398264.1918023-2293-14866083655403/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:45 compute-0 sudo[205425]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:45 compute-0 ceph-mon[74654]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:37:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:37:45 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:37:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:37:45 compute-0 sudo[205577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxhsbupxducypujgpbtfcfrzttngkott ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398265.4061613-2293-173779313235720/AnsiballZ_stat.py'
Nov 29 06:37:45 compute-0 sudo[205577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:45.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:45 compute-0 python3.9[205579]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:45 compute-0 sudo[205577]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:37:46 compute-0 sudo[205657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:46 compute-0 sudo[205657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:46 compute-0 sudo[205657]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:46 compute-0 sudo[205748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lysnfmplxcvgxleysfufahqjeyfcylht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398265.4061613-2293-173779313235720/AnsiballZ_copy.py'
Nov 29 06:37:46 compute-0 sudo[205748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:46 compute-0 sudo[205705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:37:46 compute-0 sudo[205705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:46 compute-0 sudo[205705]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:46 compute-0 sudo[205753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:46 compute-0 sudo[205753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:46 compute-0 sudo[205753]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:46 compute-0 sudo[205778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:37:46 compute-0 sudo[205778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:46 compute-0 python3.9[205751]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398265.4061613-2293-173779313235720/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:46 compute-0 sudo[205748]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:46 compute-0 sudo[205778]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:37:46 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:37:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:37:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:37:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:37:47 compute-0 sudo[205986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snrkbcozqknkrartwghwhspffxcsdooz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398266.7080464-2293-32035291134779/AnsiballZ_stat.py'
Nov 29 06:37:47 compute-0 sudo[205986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:47.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:47 compute-0 python3.9[205988]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:47 compute-0 sudo[205986]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:37:47 compute-0 ceph-mon[74654]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:37:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:37:47 compute-0 sudo[206109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xucmspneyivezdhzzmqakatwhlzcshle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398266.7080464-2293-32035291134779/AnsiballZ_copy.py'
Nov 29 06:37:47 compute-0 sudo[206109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:47 compute-0 python3.9[206111]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398266.7080464-2293-32035291134779/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:47 compute-0 sudo[206109]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:47.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:37:47 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 46568f9b-a8d3-4397-baa9-2f892fa0855f does not exist
Nov 29 06:37:47 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 827cea6d-ee1d-44fa-b0ea-6a8e28c37f99 does not exist
Nov 29 06:37:47 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev e6172c4b-9c50-4e47-a013-b13fd5e628d0 does not exist
Nov 29 06:37:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:37:47 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:37:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:37:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:37:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:37:47 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:37:48 compute-0 sudo[206136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:48 compute-0 sudo[206136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:48 compute-0 sudo[206136]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:48 compute-0 sudo[206182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:37:48 compute-0 sudo[206182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:48 compute-0 sudo[206182]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:48 compute-0 sudo[206230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:48 compute-0 sudo[206230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:48 compute-0 sudo[206230]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:48 compute-0 sudo[206263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:37:48 compute-0 sudo[206263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:48 compute-0 sudo[206368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zibqqpwfwoevuevjwrfrtbundfahbxuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398268.0563-2293-93309710320684/AnsiballZ_stat.py'
Nov 29 06:37:48 compute-0 sudo[206368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:48 compute-0 podman[206404]: 2025-11-29 06:37:48.608153674 +0000 UTC m=+0.064839657 container create 79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 06:37:48 compute-0 python3.9[206376]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:48 compute-0 sudo[206368]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:48 compute-0 systemd[1]: Started libpod-conmon-79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183.scope.
Nov 29 06:37:48 compute-0 podman[206404]: 2025-11-29 06:37:48.568567013 +0000 UTC m=+0.025253006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:37:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:37:48 compute-0 podman[206404]: 2025-11-29 06:37:48.758985659 +0000 UTC m=+0.215671652 container init 79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:37:48 compute-0 podman[206404]: 2025-11-29 06:37:48.767209918 +0000 UTC m=+0.223895871 container start 79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:37:48 compute-0 competent_antonelli[206421]: 167 167
Nov 29 06:37:48 compute-0 systemd[1]: libpod-79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183.scope: Deactivated successfully.
Nov 29 06:37:48 compute-0 sudo[206483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:48 compute-0 sudo[206483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:48 compute-0 sudo[206483]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:48 compute-0 ceph-mon[74654]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:37:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:37:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:37:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:37:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:37:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:37:48 compute-0 podman[206404]: 2025-11-29 06:37:48.896633691 +0000 UTC m=+0.353319644 container attach 79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:37:48 compute-0 podman[206404]: 2025-11-29 06:37:48.89795213 +0000 UTC m=+0.354638093 container died 79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:37:48 compute-0 sudo[206531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:48 compute-0 sudo[206531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:48 compute-0 sudo[206531]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-17289264d4a4e08b437cb91e5fe2283757089590f5f85d1ed7ab1f40e8695725-merged.mount: Deactivated successfully.
Nov 29 06:37:48 compute-0 podman[206404]: 2025-11-29 06:37:48.942363991 +0000 UTC m=+0.399049934 container remove 79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_antonelli, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:37:48 compute-0 systemd[1]: libpod-conmon-79a966d83972aa7dfeb7a0558a4f071d8f56e202fdfd9fe9e0a71f0ee608f183.scope: Deactivated successfully.
Nov 29 06:37:49 compute-0 sudo[206610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvqbrryzcarcaxjejvmxadsuqnawhsqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398268.0563-2293-93309710320684/AnsiballZ_copy.py'
Nov 29 06:37:49 compute-0 sudo[206610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:49 compute-0 podman[206618]: 2025-11-29 06:37:49.098722968 +0000 UTC m=+0.028527601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:37:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:49.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:49 compute-0 python3.9[206612]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398268.0563-2293-93309710320684/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:49 compute-0 sudo[206610]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:49 compute-0 podman[206618]: 2025-11-29 06:37:49.252289173 +0000 UTC m=+0.182093766 container create 4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:37:49 compute-0 systemd[1]: Started libpod-conmon-4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a.scope.
Nov 29 06:37:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246605d7a2abbc2179c1828125592a246b513031e8181d20bf3035a2a6e1158e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246605d7a2abbc2179c1828125592a246b513031e8181d20bf3035a2a6e1158e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246605d7a2abbc2179c1828125592a246b513031e8181d20bf3035a2a6e1158e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246605d7a2abbc2179c1828125592a246b513031e8181d20bf3035a2a6e1158e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246605d7a2abbc2179c1828125592a246b513031e8181d20bf3035a2a6e1158e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:37:49 compute-0 podman[206618]: 2025-11-29 06:37:49.416662022 +0000 UTC m=+0.346466625 container init 4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 06:37:49 compute-0 podman[206618]: 2025-11-29 06:37:49.429965499 +0000 UTC m=+0.359770132 container start 4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 06:37:49 compute-0 podman[206618]: 2025-11-29 06:37:49.434087999 +0000 UTC m=+0.363892632 container attach 4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 06:37:49 compute-0 sudo[206789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvyarcyswxweldkchhtoxpumpgahriow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398269.4008389-2293-100870386556352/AnsiballZ_stat.py'
Nov 29 06:37:49 compute-0 sudo[206789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:49.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:49 compute-0 python3.9[206791]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:49 compute-0 sudo[206789]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:49 compute-0 ceph-mon[74654]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:50 compute-0 quirky_ride[206659]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:37:50 compute-0 quirky_ride[206659]: --> relative data size: 1.0
Nov 29 06:37:50 compute-0 quirky_ride[206659]: --> All data devices are unavailable
Nov 29 06:37:50 compute-0 systemd[1]: libpod-4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a.scope: Deactivated successfully.
Nov 29 06:37:50 compute-0 podman[206618]: 2025-11-29 06:37:50.249676085 +0000 UTC m=+1.179480698 container died 4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:37:50 compute-0 sudo[206932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezetpnddypkbfitnhamyuffbuulrkiyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398269.4008389-2293-100870386556352/AnsiballZ_copy.py'
Nov 29 06:37:50 compute-0 sudo[206932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-246605d7a2abbc2179c1828125592a246b513031e8181d20bf3035a2a6e1158e-merged.mount: Deactivated successfully.
Nov 29 06:37:50 compute-0 python3.9[206938]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398269.4008389-2293-100870386556352/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:50 compute-0 sudo[206932]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:50 compute-0 podman[206618]: 2025-11-29 06:37:50.532644643 +0000 UTC m=+1.462449236 container remove 4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:37:50 compute-0 systemd[1]: libpod-conmon-4d180951dfb94ecd4bb345d831f9ec48a3bfc97870ebe3f2d11cd2914bab5b3a.scope: Deactivated successfully.
Nov 29 06:37:50 compute-0 sudo[206263]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:50 compute-0 sudo[206963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:50 compute-0 sudo[206963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:50 compute-0 sudo[206963]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:50 compute-0 sudo[206989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:37:50 compute-0 sudo[206989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:50 compute-0 sudo[206989]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:50 compute-0 sudo[207040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:50 compute-0 sudo[207040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:50 compute-0 sudo[207040]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:50 compute-0 sudo[207090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:37:50 compute-0 sudo[207090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:50 compute-0 sudo[207200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffkiqbokdgsdkyxhdxgjignwrjqxjmrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398270.6568413-2293-233160472702351/AnsiballZ_stat.py'
Nov 29 06:37:50 compute-0 sudo[207200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:51 compute-0 python3.9[207206]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:51 compute-0 sudo[207200]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:51.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:51 compute-0 podman[207234]: 2025-11-29 06:37:51.186556957 +0000 UTC m=+0.059585534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:37:51 compute-0 podman[207234]: 2025-11-29 06:37:51.321351617 +0000 UTC m=+0.194380184 container create 08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nightingale, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:37:51 compute-0 sudo[207365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srdqjzqjdauhnmffegvnthkahiorkwdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398270.6568413-2293-233160472702351/AnsiballZ_copy.py'
Nov 29 06:37:51 compute-0 sudo[207365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:51 compute-0 systemd[1]: Started libpod-conmon-08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f.scope.
Nov 29 06:37:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:37:51 compute-0 podman[207234]: 2025-11-29 06:37:51.880700111 +0000 UTC m=+0.753728668 container init 08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nightingale, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:37:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:51.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:51 compute-0 podman[207234]: 2025-11-29 06:37:51.897218491 +0000 UTC m=+0.770247058 container start 08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 29 06:37:51 compute-0 python3.9[207367]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398270.6568413-2293-233160472702351/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:51 compute-0 hungry_nightingale[207373]: 167 167
Nov 29 06:37:51 compute-0 systemd[1]: libpod-08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f.scope: Deactivated successfully.
Nov 29 06:37:51 compute-0 sudo[207365]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:51 compute-0 podman[207234]: 2025-11-29 06:37:51.996756626 +0000 UTC m=+0.869785213 container attach 08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:37:51 compute-0 podman[207234]: 2025-11-29 06:37:51.997383714 +0000 UTC m=+0.870412261 container died 08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nightingale, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 06:37:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-404472b18cfc44c6f0060a7db91356ab91df3e380621e409c17dd1eaf4a601be-merged.mount: Deactivated successfully.
Nov 29 06:37:52 compute-0 podman[207234]: 2025-11-29 06:37:52.215765224 +0000 UTC m=+1.088793791 container remove 08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:37:52 compute-0 ceph-mon[74654]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:52 compute-0 systemd[1]: libpod-conmon-08b025f5a7a590f7a778a1c45555f9faaab34efcb725c58e60af7b4db044f34f.scope: Deactivated successfully.
Nov 29 06:37:52 compute-0 sudo[207557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkphawrwcuermmnhqthxipaknbcfsgap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398272.0888188-2293-54065289805619/AnsiballZ_stat.py'
Nov 29 06:37:52 compute-0 podman[207520]: 2025-11-29 06:37:52.42991534 +0000 UTC m=+0.056918976 container create 58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 06:37:52 compute-0 sudo[207557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:52 compute-0 systemd[1]: Started libpod-conmon-58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542.scope.
Nov 29 06:37:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:37:52 compute-0 podman[207520]: 2025-11-29 06:37:52.414933274 +0000 UTC m=+0.041936930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e42bfda15dd175c9eef91cef0cb4784884c16cce0f3b0d2e878b171c1cfaa0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e42bfda15dd175c9eef91cef0cb4784884c16cce0f3b0d2e878b171c1cfaa0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e42bfda15dd175c9eef91cef0cb4784884c16cce0f3b0d2e878b171c1cfaa0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:37:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76e42bfda15dd175c9eef91cef0cb4784884c16cce0f3b0d2e878b171c1cfaa0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:37:52 compute-0 podman[207520]: 2025-11-29 06:37:52.523937784 +0000 UTC m=+0.150941430 container init 58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shirley, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:37:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:37:52 compute-0 podman[207520]: 2025-11-29 06:37:52.531854824 +0000 UTC m=+0.158858460 container start 58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shirley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:37:52 compute-0 podman[207520]: 2025-11-29 06:37:52.535888291 +0000 UTC m=+0.162891957 container attach 58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shirley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 06:37:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:52 compute-0 python3.9[207561]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:52 compute-0 sudo[207557]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:53 compute-0 sudo[207690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbnwkoxzidcxfmxphenauomuqgxqafkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398272.0888188-2293-54065289805619/AnsiballZ_copy.py'
Nov 29 06:37:53 compute-0 sudo[207690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:53.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:53 compute-0 python3.9[207692]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398272.0888188-2293-54065289805619/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:53 compute-0 ceph-mon[74654]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:53 compute-0 sudo[207690]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]: {
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:     "1": [
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:         {
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:             "devices": [
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:                 "/dev/loop3"
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:             ],
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:             "lv_name": "ceph_lv0",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:             "lv_size": "7511998464",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:             "name": "ceph_lv0",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:             "tags": {
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:                 "ceph.cluster_name": "ceph",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:                 "ceph.crush_device_class": "",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:                 "ceph.encrypted": "0",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:                 "ceph.osd_id": "1",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:                 "ceph.type": "block",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:                 "ceph.vdo": "0"
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:             },
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:             "type": "block",
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:             "vg_name": "ceph_vg0"
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:         }
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]:     ]
Nov 29 06:37:53 compute-0 ecstatic_shirley[207564]: }
Nov 29 06:37:53 compute-0 systemd[1]: libpod-58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542.scope: Deactivated successfully.
Nov 29 06:37:53 compute-0 podman[207697]: 2025-11-29 06:37:53.410207705 +0000 UTC m=+0.027280094 container died 58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 06:37:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-76e42bfda15dd175c9eef91cef0cb4784884c16cce0f3b0d2e878b171c1cfaa0-merged.mount: Deactivated successfully.
Nov 29 06:37:53 compute-0 podman[207697]: 2025-11-29 06:37:53.804660104 +0000 UTC m=+0.421732473 container remove 58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_shirley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 29 06:37:53 compute-0 systemd[1]: libpod-conmon-58299f21fc74a88c0c3bd1a0a6ef5c549336e148dc37229e196ef5d46af63542.scope: Deactivated successfully.
Nov 29 06:37:53 compute-0 sudo[207090]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:37:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:53.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:37:53 compute-0 sudo[207811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:53 compute-0 sudo[207811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:53 compute-0 sudo[207811]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:53 compute-0 sudo[207854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:37:53 compute-0 sudo[207854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:53 compute-0 sudo[207854]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:53 compute-0 sudo[207918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqcvcneucghetsxkypfzgzhyikxpehyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398273.7118747-2293-42781858481385/AnsiballZ_stat.py'
Nov 29 06:37:53 compute-0 sudo[207918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:54 compute-0 sudo[207907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:37:54 compute-0 sudo[207907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:54 compute-0 sudo[207907]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:54 compute-0 sudo[207939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:37:54 compute-0 sudo[207939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:37:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:37:54
Nov 29 06:37:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:37:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:37:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'volumes', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'images']
Nov 29 06:37:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:37:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:37:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:37:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:37:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:37:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:37:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:37:54 compute-0 python3.9[207936]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:54 compute-0 sudo[207918]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:54 compute-0 podman[208006]: 2025-11-29 06:37:54.430198314 +0000 UTC m=+0.028702806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:37:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:54 compute-0 podman[208006]: 2025-11-29 06:37:54.626476451 +0000 UTC m=+0.224980923 container create 5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 06:37:54 compute-0 sudo[208140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nodmtaeedmqbfehieijjvxknsjosrdqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398273.7118747-2293-42781858481385/AnsiballZ_copy.py'
Nov 29 06:37:54 compute-0 sudo[208140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:55 compute-0 python3.9[208142]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398273.7118747-2293-42781858481385/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:55 compute-0 sudo[208140]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:55 compute-0 systemd[1]: Started libpod-conmon-5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855.scope.
Nov 29 06:37:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:55.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:37:55 compute-0 podman[208006]: 2025-11-29 06:37:55.546535814 +0000 UTC m=+1.145040296 container init 5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:37:55 compute-0 sudo[208298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfsfodujcghaegfpsflcffjwdruchmil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398275.2703915-2293-243300696722071/AnsiballZ_stat.py'
Nov 29 06:37:55 compute-0 sudo[208298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:55 compute-0 podman[208006]: 2025-11-29 06:37:55.553333632 +0000 UTC m=+1.151838094 container start 5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 06:37:55 compute-0 competent_bouman[208169]: 167 167
Nov 29 06:37:55 compute-0 systemd[1]: libpod-5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855.scope: Deactivated successfully.
Nov 29 06:37:55 compute-0 sshd-session[208004]: Invalid user usuario1 from 103.147.159.91 port 54064
Nov 29 06:37:55 compute-0 podman[208006]: 2025-11-29 06:37:55.79193195 +0000 UTC m=+1.390436402 container attach 5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 06:37:55 compute-0 podman[208006]: 2025-11-29 06:37:55.792327711 +0000 UTC m=+1.390832163 container died 5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:37:55 compute-0 python3.9[208301]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:55 compute-0 sudo[208298]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:55.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:55 compute-0 sshd-session[208004]: Received disconnect from 103.147.159.91 port 54064:11: Bye Bye [preauth]
Nov 29 06:37:55 compute-0 sshd-session[208004]: Disconnected from invalid user usuario1 103.147.159.91 port 54064 [preauth]
Nov 29 06:37:56 compute-0 sudo[208434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xabbjtgkkoelzqzzpmbsxchzuxidyfky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398275.2703915-2293-243300696722071/AnsiballZ_copy.py'
Nov 29 06:37:56 compute-0 sudo[208434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:56 compute-0 python3.9[208436]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398275.2703915-2293-243300696722071/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:56 compute-0 sudo[208434]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:57 compute-0 sudo[208587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqmqpmifmwfilzkpoqfzxemmsjinhljx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398276.7939715-2293-81764984725639/AnsiballZ_stat.py'
Nov 29 06:37:57 compute-0 sudo[208587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:57.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:57 compute-0 python3.9[208589]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:57 compute-0 sudo[208587]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:37:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 06:37:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:57.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 06:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e6b22f195a221270defbf517d7aaec0766c79bdee16f52e29357dd48131b6fa-merged.mount: Deactivated successfully.
Nov 29 06:37:58 compute-0 ceph-mon[74654]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:58 compute-0 sudo[208713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciqbjvflyugvzrivtpuunddjkiozmhvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398276.7939715-2293-81764984725639/AnsiballZ_copy.py'
Nov 29 06:37:58 compute-0 sudo[208713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:58 compute-0 python3.9[208715]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398276.7939715-2293-81764984725639/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:58 compute-0 sudo[208713]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:37:58 compute-0 sudo[208865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlrmmzrzrlrowftgyfiwziqhnfyadfup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398278.4193048-2293-109820329254071/AnsiballZ_stat.py'
Nov 29 06:37:58 compute-0 sudo[208865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:58 compute-0 sshd-session[208621]: Invalid user mcserver from 118.193.39.127 port 58502
Nov 29 06:37:59 compute-0 sshd-session[208621]: Received disconnect from 118.193.39.127 port 58502:11: Bye Bye [preauth]
Nov 29 06:37:59 compute-0 sshd-session[208621]: Disconnected from invalid user mcserver 118.193.39.127 port 58502 [preauth]
Nov 29 06:37:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:37:59.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:59 compute-0 python3.9[208867]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:37:59 compute-0 sudo[208865]: pam_unix(sudo:session): session closed for user root
Nov 29 06:37:59 compute-0 sudo[208989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slfglppnwvaoqbayyckytkccxqbexysr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398278.4193048-2293-109820329254071/AnsiballZ_copy.py'
Nov 29 06:37:59 compute-0 sudo[208989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:37:59 compute-0 python3.9[208991]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398278.4193048-2293-109820329254071/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:37:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:37:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:37:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:37:59.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:37:59 compute-0 sudo[208989]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:00 compute-0 podman[208006]: 2025-11-29 06:38:00.340641924 +0000 UTC m=+5.939146366 container remove 5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 06:38:00 compute-0 ceph-mon[74654]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:00 compute-0 sudo[209141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atfwjihzrevuofmcromfiqgoprwijzin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398280.0263493-2293-122910881362166/AnsiballZ_stat.py'
Nov 29 06:38:00 compute-0 sudo[209141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:00 compute-0 systemd[1]: libpod-conmon-5268dec2797fbcd5222111041f9c90220974158fda30ed810dd2cb6d3e6d6855.scope: Deactivated successfully.
Nov 29 06:38:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:00 compute-0 python3.9[209145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:38:00 compute-0 sudo[209141]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:00 compute-0 podman[209151]: 2025-11-29 06:38:00.517742544 +0000 UTC m=+0.028396836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:38:00 compute-0 podman[209151]: 2025-11-29 06:38:00.734141857 +0000 UTC m=+0.244796089 container create c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:38:00 compute-0 systemd[1]: Started libpod-conmon-c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a.scope.
Nov 29 06:38:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93f676b82c340fd03f8fcbb455e2f4d585579625486f8e9fb4be90684d79f47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93f676b82c340fd03f8fcbb455e2f4d585579625486f8e9fb4be90684d79f47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93f676b82c340fd03f8fcbb455e2f4d585579625486f8e9fb4be90684d79f47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:38:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93f676b82c340fd03f8fcbb455e2f4d585579625486f8e9fb4be90684d79f47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:38:00 compute-0 podman[209151]: 2025-11-29 06:38:00.887936288 +0000 UTC m=+0.398590510 container init c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 06:38:00 compute-0 podman[209151]: 2025-11-29 06:38:00.901319968 +0000 UTC m=+0.411974180 container start c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 06:38:00 compute-0 podman[209151]: 2025-11-29 06:38:00.905030765 +0000 UTC m=+0.415685057 container attach c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 06:38:01 compute-0 sudo[209294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdjiontbalzfxfrtpxhpnueggbyhbxug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398280.0263493-2293-122910881362166/AnsiballZ_copy.py'
Nov 29 06:38:01 compute-0 sudo[209294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:01.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:01 compute-0 python3.9[209296]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398280.0263493-2293-122910881362166/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:01 compute-0 sudo[209294]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:01 compute-0 tender_babbage[209219]: {
Nov 29 06:38:01 compute-0 tender_babbage[209219]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:38:01 compute-0 tender_babbage[209219]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:38:01 compute-0 tender_babbage[209219]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:38:01 compute-0 tender_babbage[209219]:         "osd_id": 1,
Nov 29 06:38:01 compute-0 tender_babbage[209219]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:38:01 compute-0 tender_babbage[209219]:         "type": "bluestore"
Nov 29 06:38:01 compute-0 tender_babbage[209219]:     }
Nov 29 06:38:01 compute-0 tender_babbage[209219]: }
Nov 29 06:38:01 compute-0 systemd[1]: libpod-c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a.scope: Deactivated successfully.
Nov 29 06:38:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:01.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:01 compute-0 podman[209436]: 2025-11-29 06:38:01.966633035 +0000 UTC m=+0.049965484 container died c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:38:02 compute-0 python3.9[209475]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:38:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d93f676b82c340fd03f8fcbb455e2f4d585579625486f8e9fb4be90684d79f47-merged.mount: Deactivated successfully.
Nov 29 06:38:02 compute-0 podman[209436]: 2025-11-29 06:38:02.495623437 +0000 UTC m=+0.578955826 container remove c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 06:38:02 compute-0 systemd[1]: libpod-conmon-c7bd979705f5c466b4201e325d6b23ad6a122c2d55503a73f74208ee5efb153a.scope: Deactivated successfully.
Nov 29 06:38:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:38:02 compute-0 sudo[207939]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:38:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:02 compute-0 podman[209481]: 2025-11-29 06:38:02.590214697 +0000 UTC m=+0.247859478 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:38:02 compute-0 podman[209482]: 2025-11-29 06:38:02.619241121 +0000 UTC m=+0.277531681 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 06:38:03 compute-0 sudo[209673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvjbmcxvmdhhpsekofasvnnxnlegyhpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398282.5858195-2911-88071273786705/AnsiballZ_seboolean.py'
Nov 29 06:38:03 compute-0 sudo[209673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:03.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:03 compute-0 python3.9[209675]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 29 06:38:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:03.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:05.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:05 compute-0 ceph-mon[74654]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:05 compute-0 ceph-mon[74654]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:05.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:38:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:38:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:38:06 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 44377ac0-a2d7-4050-a55a-b2b0f3957b55 does not exist
Nov 29 06:38:06 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 55ae6183-02c1-49f6-92b1-b2971b18711e does not exist
Nov 29 06:38:06 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev a0886498-4329-440a-8592-04949dcdb8b2 does not exist
Nov 29 06:38:06 compute-0 sudo[209682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:38:06 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 29 06:38:06 compute-0 sudo[209682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:38:06 compute-0 sudo[209682]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:07 compute-0 sudo[209707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:38:07 compute-0 sudo[209707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:38:07 compute-0 sudo[209707]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:07.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:38:07 compute-0 ceph-mon[74654]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:07 compute-0 ceph-mon[74654]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:38:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:07.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:07 compute-0 sudo[209673]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:08 compute-0 sudo[209881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxliqlavfhjxchpmmyvurgppiilxxfac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398288.1457565-2935-125954314981101/AnsiballZ_copy.py'
Nov 29 06:38:08 compute-0 sudo[209881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:08 compute-0 python3.9[209883]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:08 compute-0 sudo[209881]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:09 compute-0 sudo[209961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:38:09 compute-0 sudo[209961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:38:09 compute-0 sudo[209961]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:09 compute-0 sudo[210009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:38:09 compute-0 sudo[210009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:38:09 compute-0 sudo[210009]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:09 compute-0 sudo[210084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obblgutovqvfriwoiwhewceqyemekeqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398288.8526337-2935-89044071818151/AnsiballZ_copy.py'
Nov 29 06:38:09 compute-0 sudo[210084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:09.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:09 compute-0 python3.9[210086]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:09 compute-0 sudo[210084]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:09.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:10 compute-0 sudo[210236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmxavzqpqegsseibbqnntcosnoyupdad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398289.6525428-2935-89647431185311/AnsiballZ_copy.py'
Nov 29 06:38:10 compute-0 sudo[210236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:10 compute-0 python3.9[210238]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:10 compute-0 sudo[210236]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:10 compute-0 sudo[210390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzyhqjxefraldhucneklvoopmqauquxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398290.5044746-2935-275001113785129/AnsiballZ_copy.py'
Nov 29 06:38:10 compute-0 sudo[210390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:11 compute-0 python3.9[210392]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:11 compute-0 sudo[210390]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:11.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:11 compute-0 sshd-session[210345]: Invalid user es from 176.109.67.96 port 42242
Nov 29 06:38:11 compute-0 ceph-mon[74654]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:11 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:38:11 compute-0 sudo[210543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlpbjyzfotsgzpuyeirqiotntgcweqhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398291.2273288-2935-43953492072377/AnsiballZ_copy.py'
Nov 29 06:38:11 compute-0 sudo[210543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:11 compute-0 sshd-session[210345]: Received disconnect from 176.109.67.96 port 42242:11: Bye Bye [preauth]
Nov 29 06:38:11 compute-0 sshd-session[210345]: Disconnected from invalid user es 176.109.67.96 port 42242 [preauth]
Nov 29 06:38:11 compute-0 python3.9[210545]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:11 compute-0 sudo[210543]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 06:38:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:11.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 06:38:12 compute-0 sudo[210695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvtazbxvihzvfrefdoypewyltqdjurq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398291.9679632-3043-123450162708043/AnsiballZ_copy.py'
Nov 29 06:38:12 compute-0 sudo[210695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:12 compute-0 python3.9[210697]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:38:12 compute-0 sudo[210695]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:12 compute-0 ceph-mon[74654]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:12 compute-0 ceph-mon[74654]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:38:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:38:13 compute-0 sudo[210848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbvwpvkbuxqxnjkserczzxqteyzcsbmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398292.699186-3043-249160618203115/AnsiballZ_copy.py'
Nov 29 06:38:13 compute-0 sudo[210848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:13.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:13 compute-0 python3.9[210850]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:13 compute-0 sudo[210848]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:13 compute-0 sudo[211000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwqyfzghfmehlzulcdcunvwuwvgbetwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398293.4142663-3043-9112042104705/AnsiballZ_copy.py'
Nov 29 06:38:13 compute-0 sudo[211000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:13.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:13 compute-0 python3.9[211002]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:13 compute-0 sudo[211000]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:14 compute-0 ceph-mon[74654]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:14 compute-0 sudo[211152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arnwaqgfssbubxfuqqpgswzostzwylci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398294.1456246-3043-54885607163664/AnsiballZ_copy.py'
Nov 29 06:38:14 compute-0 sudo[211152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:14 compute-0 python3.9[211154]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:14 compute-0 sudo[211152]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:15.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:15 compute-0 sudo[211305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpeffqmmyxmgwekeffeihoknkwwosuac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398294.9450514-3043-243128673804207/AnsiballZ_copy.py'
Nov 29 06:38:15 compute-0 sudo[211305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:15 compute-0 python3.9[211307]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:15 compute-0 sudo[211305]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:15.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:16 compute-0 ceph-mon[74654]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:16 compute-0 sudo[211459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goyodlaavaqnmcexzjqausptuhzpgibj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398295.6964343-3151-46366561578230/AnsiballZ_systemd.py'
Nov 29 06:38:16 compute-0 sudo[211459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:16 compute-0 python3.9[211461]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:38:16 compute-0 systemd[1]: Reloading.
Nov 29 06:38:16 compute-0 systemd-sysv-generator[211491]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:38:16 compute-0 systemd-rc-local-generator[211487]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:38:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:16 compute-0 sshd-session[211308]: Invalid user hello from 49.247.35.31 port 31093
Nov 29 06:38:16 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 29 06:38:16 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 29 06:38:16 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 29 06:38:16 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 29 06:38:16 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 29 06:38:16 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 29 06:38:16 compute-0 sudo[211459]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:16 compute-0 sshd-session[211308]: Received disconnect from 49.247.35.31 port 31093:11: Bye Bye [preauth]
Nov 29 06:38:16 compute-0 sshd-session[211308]: Disconnected from invalid user hello 49.247.35.31 port 31093 [preauth]
Nov 29 06:38:17 compute-0 ceph-mon[74654]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:38:17.224 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:38:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:38:17.226 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:38:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:38:17.226 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:38:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:17.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:17 compute-0 sudo[211653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxrcpsergqkmkfdxmkcmmnincbzzevsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398296.960939-3151-239532585351920/AnsiballZ_systemd.py'
Nov 29 06:38:17 compute-0 sudo[211653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:38:17 compute-0 python3.9[211655]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:38:17 compute-0 systemd[1]: Reloading.
Nov 29 06:38:17 compute-0 systemd-rc-local-generator[211683]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:38:17 compute-0 systemd-sysv-generator[211687]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:38:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:17.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:17 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 29 06:38:17 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 29 06:38:17 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 29 06:38:17 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 29 06:38:17 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 29 06:38:17 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 29 06:38:17 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 06:38:17 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 29 06:38:18 compute-0 sudo[211653]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:18 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 29 06:38:18 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 29 06:38:18 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 06:38:18 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 29 06:38:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:18 compute-0 sudo[211880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqkouctgtldpwpzlpbqtuuookjkszlip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398298.205696-3151-213873115829923/AnsiballZ_systemd.py'
Nov 29 06:38:18 compute-0 sudo[211880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:19.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:19 compute-0 python3.9[211882]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:38:19 compute-0 systemd[1]: Reloading.
Nov 29 06:38:19 compute-0 systemd-sysv-generator[211916]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:38:19 compute-0 systemd-rc-local-generator[211913]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:38:19 compute-0 setroubleshoot[211718]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f4bda634-c859-4153-984e-4815756e6df6
Nov 29 06:38:19 compute-0 setroubleshoot[211718]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 29 06:38:19 compute-0 setroubleshoot[211718]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f4bda634-c859-4153-984e-4815756e6df6
Nov 29 06:38:19 compute-0 setroubleshoot[211718]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 29 06:38:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:19.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:20 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 29 06:38:20 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 29 06:38:20 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 29 06:38:20 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 29 06:38:20 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 29 06:38:20 compute-0 ceph-mon[74654]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:20 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 29 06:38:20 compute-0 sudo[211880]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:20 compute-0 sshd-session[211884]: Invalid user superset from 103.31.39.143 port 56064
Nov 29 06:38:21 compute-0 sudo[212095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oztizcdthuyowvzqzvdnqwpmfxltmojq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398300.6879973-3151-139580170395724/AnsiballZ_systemd.py'
Nov 29 06:38:21 compute-0 sudo[212095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:21 compute-0 sshd-session[211884]: Received disconnect from 103.31.39.143 port 56064:11: Bye Bye [preauth]
Nov 29 06:38:21 compute-0 sshd-session[211884]: Disconnected from invalid user superset 103.31.39.143 port 56064 [preauth]
Nov 29 06:38:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:21.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:21 compute-0 python3.9[212097]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:38:21 compute-0 systemd[1]: Reloading.
Nov 29 06:38:21 compute-0 systemd-sysv-generator[212126]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:38:21 compute-0 systemd-rc-local-generator[212122]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:38:21 compute-0 ceph-mon[74654]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:21.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:22 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 29 06:38:22 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 29 06:38:22 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 29 06:38:22 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 29 06:38:22 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 29 06:38:22 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 29 06:38:22 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 29 06:38:22 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 29 06:38:22 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 29 06:38:22 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 29 06:38:22 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 06:38:22 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 29 06:38:22 compute-0 sudo[212095]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:38:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:22 compute-0 sudo[212310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sujsrruzlryqmfvnjvqvwgrcjgmkllhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398302.3367207-3151-247388842158749/AnsiballZ_systemd.py'
Nov 29 06:38:22 compute-0 sudo[212310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:23 compute-0 python3.9[212312]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:38:23 compute-0 systemd[1]: Reloading.
Nov 29 06:38:23 compute-0 systemd-sysv-generator[212346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:38:23 compute-0 systemd-rc-local-generator[212342]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:38:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:23.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:23 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 29 06:38:23 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 29 06:38:23 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 29 06:38:23 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 29 06:38:23 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 29 06:38:23 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 29 06:38:23 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 29 06:38:23 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 29 06:38:23 compute-0 sudo[212310]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:23.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:38:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:38:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:38:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:38:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:38:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:38:24 compute-0 ceph-mon[74654]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:24 compute-0 sudo[212523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqovbecxksaniqqmteicgbefmojmqpcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398304.1270368-3262-13986177016363/AnsiballZ_file.py'
Nov 29 06:38:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:24 compute-0 sudo[212523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:24 compute-0 python3.9[212525]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:24 compute-0 sudo[212523]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:25.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:25 compute-0 sudo[212676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsvlaywphjqjezbtpgeefbxsnrtlufkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398305.197723-3286-226861776302884/AnsiballZ_find.py'
Nov 29 06:38:25 compute-0 sudo[212676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:25 compute-0 python3.9[212678]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 06:38:25 compute-0 sudo[212676]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.003000087s ======
Nov 29 06:38:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:25.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000087s
Nov 29 06:38:26 compute-0 sudo[212828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdqkkamhcmblyibgjlhvigcmhwnhagux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398306.111551-3310-223308604738672/AnsiballZ_command.py'
Nov 29 06:38:26 compute-0 sudo[212828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:26 compute-0 ceph-mon[74654]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:26 compute-0 python3.9[212830]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:38:26 compute-0 sudo[212828]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:27.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:38:27 compute-0 ceph-mon[74654]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:27.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:28 compute-0 sshd-session[212935]: Invalid user student1 from 162.214.92.14 port 55618
Nov 29 06:38:28 compute-0 python3.9[212987]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 06:38:28 compute-0 sshd-session[212935]: Received disconnect from 162.214.92.14 port 55618:11: Bye Bye [preauth]
Nov 29 06:38:28 compute-0 sshd-session[212935]: Disconnected from invalid user student1 162.214.92.14 port 55618 [preauth]
Nov 29 06:38:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:29 compute-0 sudo[213118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:38:29 compute-0 sudo[213118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:38:29 compute-0 sudo[213118]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:29 compute-0 sudo[213164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:38:29 compute-0 sudo[213164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:38:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:29.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:29 compute-0 sudo[213164]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:29 compute-0 python3.9[213161]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:38:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:38:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:38:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:38:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:38:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:38:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:38:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:38:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:38:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:38:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:38:29 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 29 06:38:29 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 29 06:38:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:29.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:30 compute-0 python3.9[213310]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398308.8015358-3367-245766001101515/.source.xml follow=False _original_basename=secret.xml.j2 checksum=63744b3abb892aaab98ed7226f328ffc66ff66bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:30 compute-0 ceph-mon[74654]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:30 compute-0 sudo[213460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdljttjpgzcnxrbpeatceuifksrltbrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398310.2979887-3412-275474942630870/AnsiballZ_command.py'
Nov 29 06:38:30 compute-0 sudo[213460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:30 compute-0 python3.9[213462]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 336ec58c-893b-528f-a0c1-6ed1196bc047
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:38:30 compute-0 polkitd[43682]: Registered Authentication Agent for unix-process:213464:376102 (system bus name :1.2819 [pkttyagent --process 213464 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 06:38:30 compute-0 polkitd[43682]: Unregistered Authentication Agent for unix-process:213464:376102 (system bus name :1.2819, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 06:38:30 compute-0 polkitd[43682]: Registered Authentication Agent for unix-process:213463:376102 (system bus name :1.2820 [pkttyagent --process 213463 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 06:38:30 compute-0 polkitd[43682]: Unregistered Authentication Agent for unix-process:213463:376102 (system bus name :1.2820, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 06:38:30 compute-0 sudo[213460]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:31 compute-0 ceph-mon[74654]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:31.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:31 compute-0 python3.9[213625]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:31.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:32 compute-0 sudo[213775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaddxqjrmytxclfwbaozfvxlbcrxipwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398311.93021-3460-164839313426521/AnsiballZ_command.py'
Nov 29 06:38:32 compute-0 sudo[213775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:38:32 compute-0 sudo[213775]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:33 compute-0 sudo[213953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgrsagqogoduwalytxrgwbyvaxqsnuut ; FSID=336ec58c-893b-528f-a0c1-6ed1196bc047 KEY=AQCBjyppAAAAABAAXQRTF6pnk4WV7TfvJo0Mjg== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398312.7628942-3484-90110052566857/AnsiballZ_command.py'
Nov 29 06:38:33 compute-0 sudo[213953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:33 compute-0 podman[213903]: 2025-11-29 06:38:33.093142885 +0000 UTC m=+0.096484366 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 29 06:38:33 compute-0 podman[213904]: 2025-11-29 06:38:33.129096801 +0000 UTC m=+0.120442353 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 06:38:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:33.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:33 compute-0 polkitd[43682]: Registered Authentication Agent for unix-process:213976:376349 (system bus name :1.2823 [pkttyagent --process 213976 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 06:38:33 compute-0 polkitd[43682]: Unregistered Authentication Agent for unix-process:213976:376349 (system bus name :1.2823, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 06:38:33 compute-0 sudo[213953]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:33 compute-0 ceph-mon[74654]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:33 compute-0 sudo[214131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmqetghigybmckhuaqpequnsodjjbcjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398313.5623703-3508-151201562360910/AnsiballZ_copy.py'
Nov 29 06:38:33 compute-0 sudo[214131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:33.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:34 compute-0 python3.9[214133]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:34 compute-0 sudo[214131]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:34 compute-0 sudo[214283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxkavqbpukobmpdkwlvipoktrkytllsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398314.4051373-3532-62904564504235/AnsiballZ_stat.py'
Nov 29 06:38:34 compute-0 sudo[214283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:34 compute-0 python3.9[214285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:38:35 compute-0 sudo[214283]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:35.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:35 compute-0 sudo[214407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igwpfreioocoubvcpwzmjvwluxjdjgbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398314.4051373-3532-62904564504235/AnsiballZ_copy.py'
Nov 29 06:38:35 compute-0 sudo[214407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:35 compute-0 python3.9[214409]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398314.4051373-3532-62904564504235/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:35 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:38:35 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Cumulative writes: 8512 writes, 34K keys, 8512 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 8512 writes, 1746 syncs, 4.88 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 628 writes, 988 keys, 628 commit groups, 1.0 writes per commit group, ingest: 0.32 MB, 0.00 MB/s
                                           Interval WAL: 628 writes, 295 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 06:38:35 compute-0 sudo[214407]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:35 compute-0 ceph-mon[74654]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 06:38:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:35.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 06:38:36 compute-0 sudo[214559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgnhelwowahbyxojtkuaocxydxwugiky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398316.0075383-3580-180801649171025/AnsiballZ_file.py'
Nov 29 06:38:36 compute-0 sudo[214559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:36 compute-0 python3.9[214561]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:36 compute-0 sudo[214559]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:37 compute-0 sudo[214712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvcfpjmxrthstdxsjveuxlggalagjqlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398316.7505546-3604-179873735063611/AnsiballZ_stat.py'
Nov 29 06:38:37 compute-0 sudo[214712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:37.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:37 compute-0 python3.9[214714]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:38:37 compute-0 sudo[214712]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:38:37 compute-0 sudo[214790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgjqooofcxtaiwmpireqvjyyygxxlmks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398316.7505546-3604-179873735063611/AnsiballZ_file.py'
Nov 29 06:38:37 compute-0 sudo[214790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:37 compute-0 python3.9[214792]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:37 compute-0 sudo[214790]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:37.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:38 compute-0 ceph-mon[74654]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:38 compute-0 sudo[214944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhkmsdrbmvsdutrznasyjaxitgwkovor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398318.0109196-3640-209342310417438/AnsiballZ_stat.py'
Nov 29 06:38:38 compute-0 sudo[214944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:38 compute-0 sshd-session[214793]: Invalid user castle from 193.163.72.91 port 56288
Nov 29 06:38:38 compute-0 python3.9[214946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:38:38 compute-0 sudo[214944]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:38 compute-0 sshd-session[214793]: Received disconnect from 193.163.72.91 port 56288:11: Bye Bye [preauth]
Nov 29 06:38:38 compute-0 sshd-session[214793]: Disconnected from invalid user castle 193.163.72.91 port 56288 [preauth]
Nov 29 06:38:38 compute-0 sudo[215022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyuvadqfhtploxjhxyecswvoakyuuker ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398318.0109196-3640-209342310417438/AnsiballZ_file.py'
Nov 29 06:38:38 compute-0 sudo[215022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:38 compute-0 python3.9[215024]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.x1jkd2id recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:39 compute-0 sudo[215022]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:39.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:39 compute-0 sudo[215175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgwtyveaeryoewhkdhmwkjmubvzfqkhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398319.361728-3676-117639590838905/AnsiballZ_stat.py'
Nov 29 06:38:39 compute-0 sudo[215175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:39 compute-0 python3.9[215177]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:38:39 compute-0 sudo[215175]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:39.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:40 compute-0 sudo[215253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxsmggvedmytpvgxcpaqfyzbdytuazbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398319.361728-3676-117639590838905/AnsiballZ_file.py'
Nov 29 06:38:40 compute-0 sudo[215253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:40 compute-0 python3.9[215255]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:40 compute-0 sudo[215253]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:40 compute-0 ceph-mon[74654]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:41 compute-0 sudo[215406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnwluztqhjefkpyyemtefutllcjmqkyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398320.7229445-3715-90924512803833/AnsiballZ_command.py'
Nov 29 06:38:41 compute-0 sudo[215406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:41.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:41 compute-0 python3.9[215408]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:38:41 compute-0 sudo[215406]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000030s ======
Nov 29 06:38:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:41.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Nov 29 06:38:42 compute-0 sudo[215561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cswltlkmlqpzrxspfuzjilyzgwnvmwdt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764398321.6008813-3739-104293588390854/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 06:38:42 compute-0 sudo[215561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:42 compute-0 python3[215563]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 06:38:42 compute-0 sudo[215561]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:42 compute-0 sshd-session[215434]: Invalid user demo from 34.92.81.41 port 33004
Nov 29 06:38:43 compute-0 sshd-session[215434]: Received disconnect from 34.92.81.41 port 33004:11: Bye Bye [preauth]
Nov 29 06:38:43 compute-0 sshd-session[215434]: Disconnected from invalid user demo 34.92.81.41 port 33004 [preauth]
Nov 29 06:38:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:43.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:43 compute-0 sudo[215714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agysfiplnsifyowxmzapupsfnstvbanp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398322.9634871-3763-1018828554282/AnsiballZ_stat.py'
Nov 29 06:38:43 compute-0 sudo[215714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:43 compute-0 python3.9[215716]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:38:43 compute-0 sudo[215714]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:38:43 compute-0 sudo[215794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrgrncwedipyudrbgnrbkutfyurnqzjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398322.9634871-3763-1018828554282/AnsiballZ_file.py'
Nov 29 06:38:43 compute-0 sudo[215794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:43 compute-0 ceph-mon[74654]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:43.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:44 compute-0 python3.9[215796]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:44 compute-0 sudo[215794]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:44 compute-0 sshd-session[215797]: Received disconnect from 103.143.238.173 port 45046:11: Bye Bye [preauth]
Nov 29 06:38:44 compute-0 sshd-session[215797]: Disconnected from authenticating user root 103.143.238.173 port 45046 [preauth]
Nov 29 06:38:44 compute-0 sudo[215950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iehanhciwsxbtiwqulahvibwjevqfhms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398324.2936954-3799-174758331297567/AnsiballZ_stat.py'
Nov 29 06:38:44 compute-0 sudo[215950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:44 compute-0 sshd-session[215806]: Invalid user hamed from 31.6.212.12 port 49150
Nov 29 06:38:44 compute-0 python3.9[215952]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:38:44 compute-0 sudo[215950]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:44 compute-0 sshd-session[215806]: Received disconnect from 31.6.212.12 port 49150:11: Bye Bye [preauth]
Nov 29 06:38:44 compute-0 sshd-session[215806]: Disconnected from invalid user hamed 31.6.212.12 port 49150 [preauth]
Nov 29 06:38:45 compute-0 ceph-mon[74654]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:45 compute-0 sshd-session[215742]: Invalid user packer from 103.63.25.115 port 48822
Nov 29 06:38:45 compute-0 sudo[216029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guhbbsljigbsrqsslxczdbxmbxjpeykz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398324.2936954-3799-174758331297567/AnsiballZ_file.py'
Nov 29 06:38:45 compute-0 sudo[216029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:45.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:45 compute-0 python3.9[216031]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:45 compute-0 sudo[216029]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:45 compute-0 sshd-session[215742]: Received disconnect from 103.63.25.115 port 48822:11: Bye Bye [preauth]
Nov 29 06:38:45 compute-0 sshd-session[215742]: Disconnected from invalid user packer 103.63.25.115 port 48822 [preauth]
Nov 29 06:38:45 compute-0 sudo[216181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eafipfgowrwfbwagrpfwnrhtzilzfhpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398325.5717463-3835-97257380698955/AnsiballZ_stat.py'
Nov 29 06:38:45 compute-0 sudo[216181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:45.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:46 compute-0 python3.9[216183]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:38:46 compute-0 sudo[216181]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:46 compute-0 sudo[216259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtewoiphghyuaplphahpvpgbmsleqvqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398325.5717463-3835-97257380698955/AnsiballZ_file.py'
Nov 29 06:38:46 compute-0 sudo[216259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:46 compute-0 python3.9[216261]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:46 compute-0 sudo[216259]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:47 compute-0 ceph-mon[74654]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:47.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:47 compute-0 sudo[216412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnbnvlqarleevlnycvkshdqyxfpmbdub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398327.1490717-3871-247016206881279/AnsiballZ_stat.py'
Nov 29 06:38:47 compute-0 sudo[216412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:47 compute-0 python3.9[216414]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:38:47 compute-0 sudo[216412]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:47.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:47 compute-0 sudo[216490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aexzqbgonnshkouesbgzgulgdsufpgum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398327.1490717-3871-247016206881279/AnsiballZ_file.py'
Nov 29 06:38:47 compute-0 sudo[216490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:48 compute-0 ceph-mon[74654]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:48 compute-0 python3.9[216492]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:48 compute-0 sudo[216490]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:38:48 compute-0 sudo[216642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnafylvcyjjxnwviwnkvsrpbtvhnexbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398328.3904643-3907-249409543387922/AnsiballZ_stat.py'
Nov 29 06:38:48 compute-0 sudo[216642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:49 compute-0 ceph-mon[74654]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:49 compute-0 python3.9[216644]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:38:49 compute-0 sudo[216642]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:49.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:49 compute-0 sudo[216688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:38:49 compute-0 sudo[216688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:38:49 compute-0 sudo[216688]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:49 compute-0 sudo[216737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:38:49 compute-0 sudo[216737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:38:49 compute-0 sudo[216737]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:49 compute-0 sudo[216818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlqibfyyzdwdcahehhxvjmlndfdsrenv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398328.3904643-3907-249409543387922/AnsiballZ_copy.py'
Nov 29 06:38:49 compute-0 sudo[216818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:49 compute-0 python3.9[216820]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764398328.3904643-3907-249409543387922/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:49 compute-0 sudo[216818]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:49.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:50 compute-0 sudo[216970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwubhofqucaesfkoasfvqktvffwnfryb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398329.9693666-3952-23809648804642/AnsiballZ_file.py'
Nov 29 06:38:50 compute-0 sudo[216970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:50 compute-0 python3.9[216972]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:50 compute-0 sudo[216970]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:51 compute-0 sudo[217125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emmeuqqzojegfpxlvzcdvtqndvejkhez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398330.790627-3976-151781651711588/AnsiballZ_command.py'
Nov 29 06:38:51 compute-0 sudo[217125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:51.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:51 compute-0 sshd-session[216973]: Received disconnect from 197.13.24.157 port 51458:11: Bye Bye [preauth]
Nov 29 06:38:51 compute-0 sshd-session[216973]: Disconnected from authenticating user root 197.13.24.157 port 51458 [preauth]
Nov 29 06:38:51 compute-0 python3.9[217127]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:38:51 compute-0 sudo[217125]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:51 compute-0 ceph-mon[74654]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:51.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:52 compute-0 sudo[217280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdpnzjuglbsshrjyindwzxtwgklxrccs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398331.6296906-4000-27940417889306/AnsiballZ_blockinfile.py'
Nov 29 06:38:52 compute-0 sudo[217280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:52 compute-0 python3.9[217282]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:52 compute-0 sudo[217280]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:53 compute-0 ceph-mgr[74948]: [devicehealth INFO root] Check health
Nov 29 06:38:53 compute-0 sudo[217433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcddqxhhqpyzwzpanesobugkhsbjtebq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398332.9658673-4027-69617239200309/AnsiballZ_command.py'
Nov 29 06:38:53 compute-0 sudo[217433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:53.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:53 compute-0 python3.9[217435]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:38:53 compute-0 sudo[217433]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:38:53 compute-0 ceph-mon[74654]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:38:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:53.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:38:54 compute-0 sudo[217586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txctewreuotzmfwlnaofftlnfewnfwoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398333.7476356-4051-245309231673600/AnsiballZ_stat.py'
Nov 29 06:38:54 compute-0 sudo[217586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:54 compute-0 python3.9[217588]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:38:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:38:54
Nov 29 06:38:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:38:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:38:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', '.rgw.root', 'images', '.mgr']
Nov 29 06:38:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:38:54 compute-0 sudo[217586]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:38:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:38:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:38:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:38:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:38:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:38:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:54 compute-0 sudo[217740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjnwmqmrlweglvhngkwmymcoztglscao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398334.546834-4075-172868034209451/AnsiballZ_command.py'
Nov 29 06:38:54 compute-0 sudo[217740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:55 compute-0 python3.9[217742]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:38:55 compute-0 sudo[217740]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:55.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:55 compute-0 ceph-mon[74654]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:55 compute-0 sudo[217896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzpxevqvzimjzjqniyeghwpuzgdbkoht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398335.325246-4099-227629163655223/AnsiballZ_file.py'
Nov 29 06:38:55 compute-0 sudo[217896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:55 compute-0 python3.9[217898]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:55 compute-0 sudo[217896]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:55.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:56 compute-0 sudo[218048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkaapdfrizdovowwumcannvkqlphrjig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398336.1421137-4123-241530808499150/AnsiballZ_stat.py'
Nov 29 06:38:56 compute-0 sudo[218048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:56 compute-0 python3.9[218050]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:38:56 compute-0 sudo[218048]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:57 compute-0 sudo[218172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjqxhjuhqeycvqcfwweamserddstxdsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398336.1421137-4123-241530808499150/AnsiballZ_copy.py'
Nov 29 06:38:57 compute-0 sudo[218172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:57.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:57 compute-0 python3.9[218174]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398336.1421137-4123-241530808499150/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:57 compute-0 ceph-mon[74654]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:57 compute-0 sudo[218172]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:38:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:57.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:38:58 compute-0 sudo[218324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qistiflvevcrfrmsfnkenxozoepauysb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398337.8909786-4168-137832301099940/AnsiballZ_stat.py'
Nov 29 06:38:58 compute-0 sudo[218324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:58 compute-0 python3.9[218326]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:38:58 compute-0 sudo[218324]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:38:58 compute-0 sudo[218447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbqzcpnoehypftsvrsenjvelsboqsbcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398337.8909786-4168-137832301099940/AnsiballZ_copy.py'
Nov 29 06:38:58 compute-0 sudo[218447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:38:59 compute-0 python3.9[218449]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398337.8909786-4168-137832301099940/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:38:59 compute-0 sudo[218447]: pam_unix(sudo:session): session closed for user root
Nov 29 06:38:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:38:59.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:59 compute-0 sudo[218600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnzcqubqfdjgxrxlbpnqdgsogarbvngi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398339.3047419-4213-168083904604660/AnsiballZ_stat.py'
Nov 29 06:38:59 compute-0 sudo[218600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:38:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:38:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:38:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:38:59.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:38:59 compute-0 python3.9[218602]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:38:59 compute-0 sudo[218600]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:00 compute-0 ceph-mon[74654]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:00 compute-0 sudo[218725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnqssxcgbjiaqufqjxhgznbuuzizghpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398339.3047419-4213-168083904604660/AnsiballZ_copy.py'
Nov 29 06:39:00 compute-0 sudo[218725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:00 compute-0 python3.9[218727]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398339.3047419-4213-168083904604660/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:39:00 compute-0 sudo[218725]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:01.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:01 compute-0 sudo[218878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvzfhxnmfwpoapovnxzrnuoryquaafrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398340.952494-4258-166375081143737/AnsiballZ_systemd.py'
Nov 29 06:39:01 compute-0 sudo[218878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:01 compute-0 python3.9[218880]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:39:01 compute-0 systemd[1]: Reloading.
Nov 29 06:39:01 compute-0 ceph-mon[74654]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:01 compute-0 systemd-rc-local-generator[218905]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:39:01 compute-0 systemd-sysv-generator[218909]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:39:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:39:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:01.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:39:02 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 29 06:39:02 compute-0 sudo[218878]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:02 compute-0 sudo[219068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiebhlhonskwpdizgosavhulynkbfils ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398342.3269217-4282-131379917493118/AnsiballZ_systemd.py'
Nov 29 06:39:02 compute-0 sudo[219068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:03 compute-0 python3.9[219070]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 06:39:03 compute-0 systemd[1]: Reloading.
Nov 29 06:39:03 compute-0 systemd-rc-local-generator[219099]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:39:03 compute-0 systemd-sysv-generator[219102]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:39:03 compute-0 ceph-mon[74654]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:03 compute-0 systemd[1]: Reloading.
Nov 29 06:39:03 compute-0 podman[219108]: 2025-11-29 06:39:03.453385426 +0000 UTC m=+0.061686585 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 06:39:03 compute-0 podman[219109]: 2025-11-29 06:39:03.478501339 +0000 UTC m=+0.093179981 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 06:39:03 compute-0 systemd-rc-local-generator[219174]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:39:03 compute-0 systemd-sysv-generator[219177]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:39:03 compute-0 sudo[219068]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:03.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:39:04 compute-0 sshd-session[157892]: Connection closed by 192.168.122.30 port 41188
Nov 29 06:39:04 compute-0 sshd-session[157889]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:39:04 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Nov 29 06:39:04 compute-0 systemd[1]: session-49.scope: Consumed 3min 48.229s CPU time.
Nov 29 06:39:04 compute-0 systemd-logind[797]: Session 49 logged out. Waiting for processes to exit.
Nov 29 06:39:04 compute-0 systemd-logind[797]: Removed session 49.
Nov 29 06:39:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:05.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:05.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:06 compute-0 ceph-mon[74654]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:07.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:07 compute-0 ceph-mon[74654]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:07 compute-0 sudo[219217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:07 compute-0 sudo[219217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:07 compute-0 sudo[219217]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:07 compute-0 sudo[219242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:39:07 compute-0 sudo[219242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:07 compute-0 sudo[219242]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:07 compute-0 sudo[219267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:07 compute-0 sudo[219267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:07 compute-0 sudo[219267]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:07 compute-0 sudo[219292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 06:39:07 compute-0 sudo[219292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:07 compute-0 sshd-session[219212]: Invalid user backend from 45.78.221.93 port 46580
Nov 29 06:39:07 compute-0 sudo[219292]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:39:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:07.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:08 compute-0 sshd-session[219212]: Received disconnect from 45.78.221.93 port 46580:11: Bye Bye [preauth]
Nov 29 06:39:08 compute-0 sshd-session[219212]: Disconnected from invalid user backend 45.78.221.93 port 46580 [preauth]
Nov 29 06:39:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:39:08 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:39:08 compute-0 sshd-session[219214]: Received disconnect from 118.193.39.127 port 50864:11: Bye Bye [preauth]
Nov 29 06:39:08 compute-0 sshd-session[219214]: Disconnected from authenticating user root 118.193.39.127 port 50864 [preauth]
Nov 29 06:39:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:39:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:39:09 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:39:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:39:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:09.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:39:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:39:09 compute-0 sudo[219338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:09 compute-0 sudo[219338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:09 compute-0 sudo[219338]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:09 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:39:09 compute-0 ceph-mon[74654]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:09 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:39:09 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:39:09 compute-0 sudo[219363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:09 compute-0 sudo[219363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:09 compute-0 sudo[219363]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:09 compute-0 sudo[219381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:09 compute-0 sudo[219381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:09 compute-0 sudo[219381]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:09 compute-0 sudo[219413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:39:09 compute-0 sudo[219413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:09 compute-0 sudo[219413]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:09 compute-0 sudo[219438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:09 compute-0 sudo[219438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:09 compute-0 sudo[219438]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:09 compute-0 sudo[219463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:39:09 compute-0 sudo[219463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:09.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:09 compute-0 sshd-session[219499]: Accepted publickey for zuul from 192.168.122.30 port 56050 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:39:09 compute-0 systemd-logind[797]: New session 50 of user zuul.
Nov 29 06:39:10 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 29 06:39:10 compute-0 sshd-session[219499]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:39:10 compute-0 sudo[219463]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:39:10 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:39:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:39:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:39:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:39:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:39:10 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 1c1e8f87-0fcd-4288-8e35-32cfc1289060 does not exist
Nov 29 06:39:10 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 8c7202fb-4aa0-4419-80ad-f33df4d20ca5 does not exist
Nov 29 06:39:10 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 09f86b8d-9541-4d4c-adb7-5a60fd836bee does not exist
Nov 29 06:39:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:39:10 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:39:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:39:10 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:39:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:39:10 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:39:10 compute-0 sudo[219661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:10 compute-0 sudo[219661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:10 compute-0 sudo[219661]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:10 compute-0 sudo[219697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:39:10 compute-0 sudo[219697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:10 compute-0 sudo[219697]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:11 compute-0 sudo[219723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:11 compute-0 sudo[219723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:11 compute-0 sudo[219723]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:11 compute-0 sudo[219748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:39:11 compute-0 sudo[219748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:11 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:39:11 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:39:11 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:39:11 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:39:11 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:39:11 compute-0 python3.9[219684]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:39:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:11.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:11 compute-0 podman[219817]: 2025-11-29 06:39:11.420189894 +0000 UTC m=+0.020507651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:39:11 compute-0 podman[219817]: 2025-11-29 06:39:11.685217246 +0000 UTC m=+0.285535013 container create 865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lamarr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:39:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:11.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:12 compute-0 systemd[1]: Started libpod-conmon-865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364.scope.
Nov 29 06:39:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:39:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:39:12 compute-0 ceph-mon[74654]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:39:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:39:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:39:12 compute-0 python3.9[219985]: ansible-ansible.builtin.service_facts Invoked
Nov 29 06:39:12 compute-0 network[220002]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 06:39:12 compute-0 network[220003]: 'network-scripts' will be removed from distribution in near future.
Nov 29 06:39:12 compute-0 network[220004]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 06:39:12 compute-0 podman[219817]: 2025-11-29 06:39:12.936469233 +0000 UTC m=+1.536787000 container init 865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lamarr, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:39:12 compute-0 podman[219817]: 2025-11-29 06:39:12.948663934 +0000 UTC m=+1.548981701 container start 865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:39:12 compute-0 trusting_lamarr[219909]: 167 167
Nov 29 06:39:12 compute-0 systemd[1]: libpod-865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364.scope: Deactivated successfully.
Nov 29 06:39:12 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:39:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:39:13 compute-0 podman[219817]: 2025-11-29 06:39:13.050304097 +0000 UTC m=+1.650621844 container attach 865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 06:39:13 compute-0 podman[219817]: 2025-11-29 06:39:13.051782749 +0000 UTC m=+1.652100516 container died 865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lamarr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:39:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:13.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:13 compute-0 ceph-mon[74654]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d79babae9b4010d393d4bb81fb88263c3811ec6a393b7652914ec5625606f99-merged.mount: Deactivated successfully.
Nov 29 06:39:13 compute-0 podman[219817]: 2025-11-29 06:39:13.945861493 +0000 UTC m=+2.546179230 container remove 865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 06:39:13 compute-0 systemd[1]: libpod-conmon-865c97f4308024094ab2cabcce7f75fec22e0959d1c280d8795b998012bc4364.scope: Deactivated successfully.
Nov 29 06:39:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:13.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:39:14 compute-0 podman[220051]: 2025-11-29 06:39:14.165450109 +0000 UTC m=+0.053637064 container create 22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:39:14 compute-0 systemd[1]: Started libpod-conmon-22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1.scope.
Nov 29 06:39:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7869565c3628bfda9c5488ac64eeec0674598dc3fec0762638b11867cb53ab9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:39:14 compute-0 podman[220051]: 2025-11-29 06:39:14.135478537 +0000 UTC m=+0.023665522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7869565c3628bfda9c5488ac64eeec0674598dc3fec0762638b11867cb53ab9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7869565c3628bfda9c5488ac64eeec0674598dc3fec0762638b11867cb53ab9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7869565c3628bfda9c5488ac64eeec0674598dc3fec0762638b11867cb53ab9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7869565c3628bfda9c5488ac64eeec0674598dc3fec0762638b11867cb53ab9b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:39:14 compute-0 podman[220051]: 2025-11-29 06:39:14.580553087 +0000 UTC m=+0.468740102 container init 22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_babbage, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:39:14 compute-0 podman[220051]: 2025-11-29 06:39:14.5910732 +0000 UTC m=+0.479260165 container start 22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_babbage, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 06:39:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:14 compute-0 podman[220051]: 2025-11-29 06:39:14.74654215 +0000 UTC m=+0.634729115 container attach 22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 06:39:14 compute-0 sshd-session[220024]: Invalid user gits from 103.147.159.91 port 54186
Nov 29 06:39:15 compute-0 sshd-session[220024]: Received disconnect from 103.147.159.91 port 54186:11: Bye Bye [preauth]
Nov 29 06:39:15 compute-0 sshd-session[220024]: Disconnected from invalid user gits 103.147.159.91 port 54186 [preauth]
Nov 29 06:39:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:15.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:15 compute-0 optimistic_babbage[220071]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:39:15 compute-0 optimistic_babbage[220071]: --> relative data size: 1.0
Nov 29 06:39:15 compute-0 optimistic_babbage[220071]: --> All data devices are unavailable
Nov 29 06:39:15 compute-0 systemd[1]: libpod-22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1.scope: Deactivated successfully.
Nov 29 06:39:15 compute-0 podman[220051]: 2025-11-29 06:39:15.438162682 +0000 UTC m=+1.326349657 container died 22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-7869565c3628bfda9c5488ac64eeec0674598dc3fec0762638b11867cb53ab9b-merged.mount: Deactivated successfully.
Nov 29 06:39:15 compute-0 podman[220051]: 2025-11-29 06:39:15.512801128 +0000 UTC m=+1.400988093 container remove 22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 06:39:15 compute-0 systemd[1]: libpod-conmon-22b5ba8bcffd7a88f044efb63046fd4afa466cb6a5ee00c1242d2bd1cd0114f1.scope: Deactivated successfully.
Nov 29 06:39:15 compute-0 sudo[219748]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:15 compute-0 sudo[220171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:15 compute-0 sudo[220171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:15 compute-0 sudo[220171]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:15 compute-0 sudo[220199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:39:15 compute-0 sudo[220199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:15 compute-0 sudo[220199]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:15 compute-0 sudo[220228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:15 compute-0 sudo[220228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:15 compute-0 sudo[220228]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:15 compute-0 sudo[220256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:39:15 compute-0 sudo[220256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:15 compute-0 ceph-mon[74654]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:15.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:16 compute-0 podman[220337]: 2025-11-29 06:39:16.087439355 +0000 UTC m=+0.018939846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:39:16 compute-0 podman[220337]: 2025-11-29 06:39:16.279498699 +0000 UTC m=+0.210999210 container create 71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 06:39:16 compute-0 systemd[1]: Started libpod-conmon-71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40.scope.
Nov 29 06:39:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:39:16 compute-0 podman[220337]: 2025-11-29 06:39:16.374374827 +0000 UTC m=+0.305875408 container init 71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:39:16 compute-0 podman[220337]: 2025-11-29 06:39:16.380625517 +0000 UTC m=+0.312126028 container start 71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 29 06:39:16 compute-0 podman[220337]: 2025-11-29 06:39:16.385467676 +0000 UTC m=+0.316968147 container attach 71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 06:39:16 compute-0 reverent_pare[220371]: 167 167
Nov 29 06:39:16 compute-0 podman[220337]: 2025-11-29 06:39:16.390489281 +0000 UTC m=+0.321989742 container died 71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 06:39:16 compute-0 systemd[1]: libpod-71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40.scope: Deactivated successfully.
Nov 29 06:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5774b7570e26ea6369989f781e66a727a7c67e3603efa4ae59b63486b9ef6a4a-merged.mount: Deactivated successfully.
Nov 29 06:39:16 compute-0 podman[220337]: 2025-11-29 06:39:16.429827282 +0000 UTC m=+0.361327773 container remove 71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pare, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:39:16 compute-0 systemd[1]: libpod-conmon-71c4962f8a4bb37d6dc9cb843638ae378d123605461016c855bdaaa36d0c5a40.scope: Deactivated successfully.
Nov 29 06:39:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:16 compute-0 podman[220394]: 2025-11-29 06:39:16.632439429 +0000 UTC m=+0.058865154 container create 62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_archimedes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 06:39:16 compute-0 systemd[1]: Started libpod-conmon-62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9.scope.
Nov 29 06:39:16 compute-0 podman[220394]: 2025-11-29 06:39:16.604479105 +0000 UTC m=+0.030904920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:39:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556c37e1b0fc497e8d465d666bf7dce3d111b53d8e4d6ae91da8663671bca2ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556c37e1b0fc497e8d465d666bf7dce3d111b53d8e4d6ae91da8663671bca2ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556c37e1b0fc497e8d465d666bf7dce3d111b53d8e4d6ae91da8663671bca2ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556c37e1b0fc497e8d465d666bf7dce3d111b53d8e4d6ae91da8663671bca2ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:39:16 compute-0 podman[220394]: 2025-11-29 06:39:16.718174425 +0000 UTC m=+0.144600210 container init 62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_archimedes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:39:16 compute-0 podman[220394]: 2025-11-29 06:39:16.725925978 +0000 UTC m=+0.152351713 container start 62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 06:39:16 compute-0 podman[220394]: 2025-11-29 06:39:16.729762669 +0000 UTC m=+0.156188424 container attach 62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 06:39:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:39:17.226 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:39:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:39:17.227 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:39:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:39:17.228 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:39:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:17.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:17 compute-0 sudo[220544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brrygdcfsjdvysjjdgnftoulqvbsolox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398357.2166178-106-179855636243511/AnsiballZ_setup.py'
Nov 29 06:39:17 compute-0 sudo[220544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]: {
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:     "1": [
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:         {
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:             "devices": [
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:                 "/dev/loop3"
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:             ],
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:             "lv_name": "ceph_lv0",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:             "lv_size": "7511998464",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:             "name": "ceph_lv0",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:             "tags": {
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:                 "ceph.cluster_name": "ceph",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:                 "ceph.crush_device_class": "",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:                 "ceph.encrypted": "0",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:                 "ceph.osd_id": "1",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:                 "ceph.type": "block",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:                 "ceph.vdo": "0"
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:             },
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:             "type": "block",
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:             "vg_name": "ceph_vg0"
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:         }
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]:     ]
Nov 29 06:39:17 compute-0 adoring_archimedes[220411]: }
Nov 29 06:39:17 compute-0 systemd[1]: libpod-62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9.scope: Deactivated successfully.
Nov 29 06:39:17 compute-0 podman[220394]: 2025-11-29 06:39:17.629715622 +0000 UTC m=+1.056141357 container died 62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 29 06:39:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-556c37e1b0fc497e8d465d666bf7dce3d111b53d8e4d6ae91da8663671bca2ef-merged.mount: Deactivated successfully.
Nov 29 06:39:17 compute-0 python3.9[220546]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 06:39:17 compute-0 podman[220394]: 2025-11-29 06:39:17.926691133 +0000 UTC m=+1.353116868 container remove 62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_archimedes, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 06:39:17 compute-0 systemd[1]: libpod-conmon-62d81ce0d977f159029f326819d5de9789558f6635438063ac80519cbb76afa9.scope: Deactivated successfully.
Nov 29 06:39:17 compute-0 sudo[220256]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:17.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:18 compute-0 ceph-mon[74654]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:18 compute-0 sudo[220571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:18 compute-0 sudo[220571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:18 compute-0 sudo[220571]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:18 compute-0 sudo[220596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:39:18 compute-0 sudo[220596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:18 compute-0 sudo[220596]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:18 compute-0 sudo[220544]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:18 compute-0 sudo[220621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:18 compute-0 sudo[220621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:18 compute-0 sudo[220621]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:18 compute-0 sudo[220646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:39:18 compute-0 sudo[220646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:18 compute-0 podman[220740]: 2025-11-29 06:39:18.523425304 +0000 UTC m=+0.047355653 container create b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:39:18 compute-0 systemd[1]: Started libpod-conmon-b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c.scope.
Nov 29 06:39:18 compute-0 sudo[220800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azjktpoyayclxwkarhztngaisjghyrte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398357.2166178-106-179855636243511/AnsiballZ_dnf.py'
Nov 29 06:39:18 compute-0 sudo[220800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:18 compute-0 podman[220740]: 2025-11-29 06:39:18.500544106 +0000 UTC m=+0.024474535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:39:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:39:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:18 compute-0 podman[220740]: 2025-11-29 06:39:18.797157117 +0000 UTC m=+0.321087506 container init b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 06:39:18 compute-0 podman[220740]: 2025-11-29 06:39:18.805462526 +0000 UTC m=+0.329392875 container start b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hamilton, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:39:18 compute-0 podman[220740]: 2025-11-29 06:39:18.808816212 +0000 UTC m=+0.332746611 container attach b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:39:18 compute-0 crazy_hamilton[220802]: 167 167
Nov 29 06:39:18 compute-0 systemd[1]: libpod-b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c.scope: Deactivated successfully.
Nov 29 06:39:18 compute-0 podman[220740]: 2025-11-29 06:39:18.81258952 +0000 UTC m=+0.336519879 container died b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:39:18 compute-0 python3.9[220805]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:39:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfd6f50ceb39d6170bd154b5a30e84ff2b74571ed9e42e97fea133d71107f5a4-merged.mount: Deactivated successfully.
Nov 29 06:39:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:39:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:19.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:19 compute-0 podman[220740]: 2025-11-29 06:39:19.468996329 +0000 UTC m=+0.992926678 container remove b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hamilton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:39:19 compute-0 ceph-mon[74654]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:19 compute-0 systemd[1]: libpod-conmon-b089c79ad739b931e100b7be60e7fae5dca27a7273ad54a62bd097117aedf49c.scope: Deactivated successfully.
Nov 29 06:39:19 compute-0 podman[220831]: 2025-11-29 06:39:19.682792598 +0000 UTC m=+0.042574776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:39:19 compute-0 podman[220831]: 2025-11-29 06:39:19.776102331 +0000 UTC m=+0.135884489 container create 9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 06:39:19 compute-0 systemd[1]: Started libpod-conmon-9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550.scope.
Nov 29 06:39:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e86730046b2d7e4d8a5d91f1762dbe75bcdc39bac45dacb1bb0d1ed20d8fd8c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e86730046b2d7e4d8a5d91f1762dbe75bcdc39bac45dacb1bb0d1ed20d8fd8c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e86730046b2d7e4d8a5d91f1762dbe75bcdc39bac45dacb1bb0d1ed20d8fd8c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e86730046b2d7e4d8a5d91f1762dbe75bcdc39bac45dacb1bb0d1ed20d8fd8c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:39:19 compute-0 podman[220831]: 2025-11-29 06:39:19.875619014 +0000 UTC m=+0.235401222 container init 9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:39:19 compute-0 podman[220831]: 2025-11-29 06:39:19.885372564 +0000 UTC m=+0.245154722 container start 9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_montalcini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 06:39:19 compute-0 podman[220831]: 2025-11-29 06:39:19.889443351 +0000 UTC m=+0.249225559 container attach 9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 06:39:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:19.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:20 compute-0 practical_montalcini[220847]: {
Nov 29 06:39:20 compute-0 practical_montalcini[220847]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:39:20 compute-0 practical_montalcini[220847]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:39:20 compute-0 practical_montalcini[220847]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:39:20 compute-0 practical_montalcini[220847]:         "osd_id": 1,
Nov 29 06:39:20 compute-0 practical_montalcini[220847]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:39:20 compute-0 practical_montalcini[220847]:         "type": "bluestore"
Nov 29 06:39:20 compute-0 practical_montalcini[220847]:     }
Nov 29 06:39:20 compute-0 practical_montalcini[220847]: }
Nov 29 06:39:20 compute-0 systemd[1]: libpod-9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550.scope: Deactivated successfully.
Nov 29 06:39:20 compute-0 podman[220868]: 2025-11-29 06:39:20.905462192 +0000 UTC m=+0.112736123 container died 9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_montalcini, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 06:39:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e86730046b2d7e4d8a5d91f1762dbe75bcdc39bac45dacb1bb0d1ed20d8fd8c2-merged.mount: Deactivated successfully.
Nov 29 06:39:20 compute-0 podman[220868]: 2025-11-29 06:39:20.959247869 +0000 UTC m=+0.166521790 container remove 9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:39:20 compute-0 systemd[1]: libpod-conmon-9d87dfe1009a531785f98149320da08916d7fe20ff618ffe9a8cf4d97fe40550.scope: Deactivated successfully.
Nov 29 06:39:20 compute-0 sudo[220646]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:39:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:39:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:39:21 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:39:21 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 62c182f2-b963-4012-9e6b-cef7e1de1a97 does not exist
Nov 29 06:39:21 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev b9f0f398-0675-4a5a-896a-d50d9dc046dc does not exist
Nov 29 06:39:21 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev a6eb3983-84e2-435e-aa31-6aa0ab4e034a does not exist
Nov 29 06:39:21 compute-0 sudo[220885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:21 compute-0 sudo[220885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:21 compute-0 sudo[220885]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:21 compute-0 sudo[220910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:39:21 compute-0 sudo[220910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:21 compute-0 sudo[220910]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:21.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:21.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:22 compute-0 ceph-mon[74654]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:39:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:39:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:23.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:23 compute-0 ceph-mon[74654]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:23.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:39:24 compute-0 sudo[220800]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:39:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:39:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:39:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:39:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:39:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:39:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:24 compute-0 sudo[221085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nekgxgyhwduhuaqskspuwfvysyttcpjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398364.4571142-142-41928652090764/AnsiballZ_stat.py'
Nov 29 06:39:24 compute-0 sudo[221085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:25 compute-0 python3.9[221088]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:39:25 compute-0 sudo[221085]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:25.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:25 compute-0 ceph-mon[74654]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:25 compute-0 sudo[221238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twoqmvmlqdsuahmorhteqdcggvxawmlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398365.4613185-172-10825379309129/AnsiballZ_command.py'
Nov 29 06:39:25 compute-0 sudo[221238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:25.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:26 compute-0 python3.9[221240]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:39:26 compute-0 sudo[221238]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:26 compute-0 sudo[221391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzdeivqlmhubrhmtyydibqecivytxmna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398366.516178-202-280598200224540/AnsiballZ_stat.py'
Nov 29 06:39:26 compute-0 sudo[221391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:27 compute-0 python3.9[221393]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:39:27 compute-0 sudo[221391]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:27.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:27 compute-0 sudo[221544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqrorwjbvazloaqtwdthhcvthrucyrbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398367.1813214-226-248043457537173/AnsiballZ_command.py'
Nov 29 06:39:27 compute-0 sudo[221544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:27 compute-0 python3.9[221546]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:39:27 compute-0 sudo[221544]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:27 compute-0 ceph-mon[74654]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:27.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:28 compute-0 sudo[221697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olqjnisjvewfhmtqflqinlhayosxwuyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398367.927275-250-113594569056087/AnsiballZ_stat.py'
Nov 29 06:39:28 compute-0 sudo[221697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:28 compute-0 python3.9[221699]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:39:28 compute-0 sudo[221697]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:28 compute-0 sudo[221821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmseemiwdxqhcfxkqjgfeunapqogmljd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398367.927275-250-113594569056087/AnsiballZ_copy.py'
Nov 29 06:39:28 compute-0 sudo[221821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.065545) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398369065657, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1352, "num_deletes": 250, "total_data_size": 2476152, "memory_usage": 2503320, "flush_reason": "Manual Compaction"}
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398369080089, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1461897, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13139, "largest_seqno": 14490, "table_properties": {"data_size": 1457009, "index_size": 2284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12208, "raw_average_key_size": 20, "raw_value_size": 1446470, "raw_average_value_size": 2402, "num_data_blocks": 103, "num_entries": 602, "num_filter_entries": 602, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764398218, "oldest_key_time": 1764398218, "file_creation_time": 1764398369, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 14855 microseconds, and 7919 cpu microseconds.
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.080400) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1461897 bytes OK
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.080519) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.082785) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.082808) EVENT_LOG_v1 {"time_micros": 1764398369082800, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.082829) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2470278, prev total WAL file size 2470278, number of live WAL files 2.
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.084721) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323533' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1427KB)], [29(10MB)]
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398369084808, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 12071523, "oldest_snapshot_seqno": -1}
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4610 keys, 9170082 bytes, temperature: kUnknown
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398369140596, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 9170082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9136987, "index_size": 20441, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 112467, "raw_average_key_size": 24, "raw_value_size": 9051467, "raw_average_value_size": 1963, "num_data_blocks": 883, "num_entries": 4610, "num_filter_entries": 4610, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764398369, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.140868) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 9170082 bytes
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.142247) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.0 rd, 164.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(14.5) write-amplify(6.3) OK, records in: 5063, records dropped: 453 output_compression: NoCompression
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.142263) EVENT_LOG_v1 {"time_micros": 1764398369142256, "job": 12, "event": "compaction_finished", "compaction_time_micros": 55897, "compaction_time_cpu_micros": 20121, "output_level": 6, "num_output_files": 1, "total_output_size": 9170082, "num_input_records": 5063, "num_output_records": 4610, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398369142559, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398369144089, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.084601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.144130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.144136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.144138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.144140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:39:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:39:29.144142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:39:29 compute-0 python3.9[221823]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398367.927275-250-113594569056087/.source.iscsi _original_basename=.97aetkex follow=False checksum=91783c1b2b0f473e0aa10089b38d8c6438a20bbb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:39:29 compute-0 sudo[221821]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:29.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:39:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:39:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:39:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:39:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:39:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:39:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:39:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:39:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:39:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:39:29 compute-0 sudo[221900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:29 compute-0 sudo[221900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:29 compute-0 sudo[221900]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:29 compute-0 sudo[221925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:29 compute-0 sudo[221925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:29 compute-0 sudo[221925]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:29 compute-0 ceph-mon[74654]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:29 compute-0 sudo[222023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrwhxxoufyhsyhlmgobiclwxhhgezffr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398369.4798057-295-226973024233853/AnsiballZ_file.py'
Nov 29 06:39:29 compute-0 sudo[222023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:29.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:30 compute-0 python3.9[222025]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:39:30 compute-0 sudo[222023]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:30 compute-0 sudo[222175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjonzibvecyigqjmxeygdzvexgxqdpta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398370.4156313-319-70754642465092/AnsiballZ_lineinfile.py'
Nov 29 06:39:30 compute-0 sudo[222175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:31 compute-0 python3.9[222177]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:39:31 compute-0 sudo[222175]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:31.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:31.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:32 compute-0 ceph-mon[74654]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:32 compute-0 sudo[222328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pciwacvcmpogpjdiewafojdzrkxpewhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398371.5201178-346-238638269848183/AnsiballZ_systemd_service.py'
Nov 29 06:39:32 compute-0 sudo[222328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:32 compute-0 python3.9[222330]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:39:32 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 29 06:39:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:32 compute-0 sudo[222328]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:33 compute-0 ceph-mon[74654]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:33 compute-0 sudo[222487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mesfrredxiyskqiifluhfayanodfqscr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398372.8617373-370-47608540181992/AnsiballZ_systemd_service.py'
Nov 29 06:39:33 compute-0 sudo[222487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:33.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:33 compute-0 python3.9[222489]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:39:33 compute-0 systemd[1]: Reloading.
Nov 29 06:39:33 compute-0 systemd-rc-local-generator[222518]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:39:33 compute-0 systemd-sysv-generator[222521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:39:33 compute-0 sshd-session[222394]: Received disconnect from 176.109.67.96 port 34412:11: Bye Bye [preauth]
Nov 29 06:39:33 compute-0 sshd-session[222394]: Disconnected from authenticating user root 176.109.67.96 port 34412 [preauth]
Nov 29 06:39:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:34.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:39:34 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 06:39:34 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 29 06:39:34 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 29 06:39:34 compute-0 systemd[1]: Started Open-iSCSI.
Nov 29 06:39:34 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 29 06:39:34 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 29 06:39:34 compute-0 sudo[222487]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:34 compute-0 podman[222527]: 2025-11-29 06:39:34.501923049 +0000 UTC m=+0.090142483 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 06:39:34 compute-0 podman[222528]: 2025-11-29 06:39:34.526726412 +0000 UTC m=+0.114051041 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller)
Nov 29 06:39:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:35.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:35 compute-0 sudo[222727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbxjmfdndvgpbslbzecvlntuedcyfhaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398375.1105478-403-205409674386186/AnsiballZ_service_facts.py'
Nov 29 06:39:35 compute-0 sudo[222727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:35 compute-0 python3.9[222729]: ansible-ansible.builtin.service_facts Invoked
Nov 29 06:39:35 compute-0 network[222746]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 06:39:35 compute-0 network[222747]: 'network-scripts' will be removed from distribution in near future.
Nov 29 06:39:35 compute-0 network[222748]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 06:39:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:36.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:36 compute-0 ceph-mon[74654]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:36 compute-0 sshd-session[222754]: Received disconnect from 162.214.92.14 port 54772:11: Bye Bye [preauth]
Nov 29 06:39:36 compute-0 sshd-session[222754]: Disconnected from authenticating user root 162.214.92.14 port 54772 [preauth]
Nov 29 06:39:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:37.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:37 compute-0 ceph-mon[74654]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:38.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:39:39 compute-0 sudo[222727]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:39.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:39 compute-0 ceph-mon[74654]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:40.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:40 compute-0 sudo[223022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bexhcelaplaayijjwtqfhkaydtnffsdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398380.4105253-433-25259883611781/AnsiballZ_file.py'
Nov 29 06:39:40 compute-0 sudo[223022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:40 compute-0 python3.9[223024]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 06:39:40 compute-0 sudo[223022]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:41.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:41 compute-0 ceph-mon[74654]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:42.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:42 compute-0 sshd-session[223025]: Received disconnect from 27.112.78.245 port 46816:11: Bye Bye [preauth]
Nov 29 06:39:42 compute-0 sshd-session[223025]: Disconnected from authenticating user root 27.112.78.245 port 46816 [preauth]
Nov 29 06:39:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:43.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:43 compute-0 ceph-mon[74654]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:39:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:44.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:45.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:45 compute-0 sudo[223179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxwqwzmlptrjrsvqaxcxkppxtsbfqvcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398384.567442-457-28404057554016/AnsiballZ_modprobe.py'
Nov 29 06:39:45 compute-0 sudo[223179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:45 compute-0 python3.9[223181]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 29 06:39:45 compute-0 sudo[223179]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:46 compute-0 ceph-mon[74654]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:46.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:46 compute-0 sudo[223335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrvqxyyxqmmylvxiecuzbcuutjuswqtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398386.432603-481-16335278378920/AnsiballZ_stat.py'
Nov 29 06:39:46 compute-0 sudo[223335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:47 compute-0 python3.9[223337]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:39:47 compute-0 sudo[223335]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:47 compute-0 ceph-mon[74654]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:47.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:47 compute-0 sudo[223461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgagwyxxfmyglyiuabhdwdvbmduxrsgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398386.432603-481-16335278378920/AnsiballZ_copy.py'
Nov 29 06:39:47 compute-0 sudo[223461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:47 compute-0 python3.9[223463]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398386.432603-481-16335278378920/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:39:47 compute-0 sudo[223461]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:48.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:48 compute-0 sshd-session[223441]: Received disconnect from 49.247.35.31 port 47053:11: Bye Bye [preauth]
Nov 29 06:39:48 compute-0 sshd-session[223441]: Disconnected from authenticating user root 49.247.35.31 port 47053 [preauth]
Nov 29 06:39:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:39:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:49.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:49 compute-0 ceph-mon[74654]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:49 compute-0 sudo[223588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:49 compute-0 sudo[223588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:49 compute-0 sudo[223588]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:49 compute-0 sudo[223639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzazglcgtshzevblrhtvgsmjuswjqcdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398389.3873687-529-263944358933532/AnsiballZ_lineinfile.py'
Nov 29 06:39:49 compute-0 sudo[223639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:49 compute-0 sudo[223640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:39:49 compute-0 sudo[223640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:39:49 compute-0 sudo[223640]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:49 compute-0 python3.9[223642]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:39:50 compute-0 sudo[223639]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:50.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:51 compute-0 sshd-session[223743]: Invalid user mysql from 103.143.238.173 port 58018
Nov 29 06:39:51 compute-0 sshd-session[223743]: Received disconnect from 103.143.238.173 port 58018:11: Bye Bye [preauth]
Nov 29 06:39:51 compute-0 sshd-session[223743]: Disconnected from invalid user mysql 103.143.238.173 port 58018 [preauth]
Nov 29 06:39:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:39:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:51.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:39:51 compute-0 ceph-mon[74654]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:52.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:53.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:53 compute-0 ceph-mon[74654]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:39:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:54.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:39:54
Nov 29 06:39:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:39:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:39:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'backups', '.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.mgr']
Nov 29 06:39:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:39:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:39:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:39:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:39:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:39:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:39:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:39:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:55.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:39:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:56.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:56 compute-0 ceph-mon[74654]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:57.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:57 compute-0 sudo[223823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fctbwuvioihhzgfeeutwmikzydjesayv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398390.3864527-553-153197579210576/AnsiballZ_systemd.py'
Nov 29 06:39:57 compute-0 sudo[223823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:57 compute-0 ceph-mon[74654]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:57 compute-0 python3.9[223825]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:39:57 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 06:39:57 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 29 06:39:57 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 29 06:39:57 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 29 06:39:57 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 29 06:39:57 compute-0 sudo[223823]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:39:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:39:58.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:39:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:39:58 compute-0 sudo[223979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozikaeeehvsflkhocrwcvhaexegbzyri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398398.4130244-577-280461036494937/AnsiballZ_file.py'
Nov 29 06:39:58 compute-0 sudo[223979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:39:58 compute-0 python3.9[223981]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:39:58 compute-0 sudo[223979]: pam_unix(sudo:session): session closed for user root
Nov 29 06:39:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:39:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:39:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:39:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:39:59.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 06:40:00 compute-0 ceph-mon[74654]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:00.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:00 compute-0 sudo[224134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqldmxkhnppzejqpredbhsrvisjmovlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398399.8975239-604-181289135298866/AnsiballZ_stat.py'
Nov 29 06:40:00 compute-0 sudo[224134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:00 compute-0 python3.9[224136]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:40:00 compute-0 sudo[224134]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:00 compute-0 sshd-session[224058]: Invalid user ubuntu from 197.13.24.157 port 44278
Nov 29 06:40:00 compute-0 sshd-session[224058]: Received disconnect from 197.13.24.157 port 44278:11: Bye Bye [preauth]
Nov 29 06:40:00 compute-0 sshd-session[224058]: Disconnected from invalid user ubuntu 197.13.24.157 port 44278 [preauth]
Nov 29 06:40:01 compute-0 sudo[224289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uikznvfigpkgjklvhubprvmttnycuydr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398400.7501347-631-100990805897947/AnsiballZ_stat.py'
Nov 29 06:40:01 compute-0 sudo[224289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:01 compute-0 ceph-mon[74654]: overall HEALTH_OK
Nov 29 06:40:01 compute-0 ceph-mon[74654]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:01 compute-0 python3.9[224291]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:40:01 compute-0 sudo[224289]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:01.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:01 compute-0 sshd-session[224137]: Received disconnect from 34.92.81.41 port 35336:11: Bye Bye [preauth]
Nov 29 06:40:01 compute-0 sshd-session[224137]: Disconnected from authenticating user root 34.92.81.41 port 35336 [preauth]
Nov 29 06:40:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:02.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:02 compute-0 sudo[224441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imhalfatgnthmadqbbkqbnkyhmpiqgls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398401.477998-655-188314749192914/AnsiballZ_stat.py'
Nov 29 06:40:02 compute-0 sudo[224441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:02 compute-0 python3.9[224443]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:40:02 compute-0 sudo[224441]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:03 compute-0 sudo[224565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwtucbtnxiontfgmebsusbzorlgbaspr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398401.477998-655-188314749192914/AnsiballZ_copy.py'
Nov 29 06:40:03 compute-0 sudo[224565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:03.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:03 compute-0 python3.9[224567]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398401.477998-655-188314749192914/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:03 compute-0 sudo[224565]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:03 compute-0 ceph-mon[74654]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:40:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:04.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:04 compute-0 sudo[224717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeofualujgyeozawytsbftocthlgskfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398403.8399525-700-19037712870425/AnsiballZ_command.py'
Nov 29 06:40:04 compute-0 sudo[224717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:04 compute-0 python3.9[224719]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:40:04 compute-0 sudo[224717]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:05 compute-0 sudo[224892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcpduozpdglqwsxcwkhnvbuhnprxsvaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398404.6934698-724-183900223424993/AnsiballZ_lineinfile.py'
Nov 29 06:40:05 compute-0 sudo[224892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:05 compute-0 podman[224845]: 2025-11-29 06:40:05.029089116 +0000 UTC m=+0.060686466 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:40:05 compute-0 podman[224846]: 2025-11-29 06:40:05.060022346 +0000 UTC m=+0.086196320 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 06:40:05 compute-0 python3.9[224909]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:05 compute-0 sudo[224892]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:05.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:06.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:06 compute-0 sudo[225066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyzdqchaftbodporvabcnoppzvmkqggf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398405.4701016-748-118098223194635/AnsiballZ_replace.py'
Nov 29 06:40:06 compute-0 sudo[225066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:06 compute-0 python3.9[225068]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:06 compute-0 sudo[225066]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:06 compute-0 ceph-mon[74654]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:07 compute-0 sshd-session[223771]: error: kex_exchange_identification: read: Connection timed out
Nov 29 06:40:07 compute-0 sshd-session[223771]: banner exchange: Connection from 58.210.98.130 port 45163: Connection timed out
Nov 29 06:40:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:07.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:07 compute-0 sudo[225221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfkovkaabdpmoynncwgihzetvtqiyvjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398407.0151145-772-171884325381729/AnsiballZ_replace.py'
Nov 29 06:40:07 compute-0 sudo[225221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:07 compute-0 python3.9[225223]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:07 compute-0 sudo[225221]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:07 compute-0 sshd-session[225146]: Invalid user autcom from 31.6.212.12 port 41956
Nov 29 06:40:07 compute-0 sshd-session[225146]: Received disconnect from 31.6.212.12 port 41956:11: Bye Bye [preauth]
Nov 29 06:40:07 compute-0 sshd-session[225146]: Disconnected from invalid user autcom 31.6.212.12 port 41956 [preauth]
Nov 29 06:40:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:08.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:08 compute-0 sudo[225373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avzmoyejjeibruiosvwfteobuhrbnwhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398407.9269269-799-12009101963750/AnsiballZ_lineinfile.py'
Nov 29 06:40:08 compute-0 sudo[225373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:08 compute-0 python3.9[225375]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:08 compute-0 sudo[225373]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:08 compute-0 ceph-mon[74654]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:40:09 compute-0 sudo[225526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vieevhhezmnaltktbvicvraxfvhzqryg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398408.6924632-799-245324683791074/AnsiballZ_lineinfile.py'
Nov 29 06:40:09 compute-0 sudo[225526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:09 compute-0 python3.9[225528]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:09 compute-0 sudo[225526]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:09.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:09 compute-0 sudo[225605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:09 compute-0 sudo[225605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:09 compute-0 sudo[225605]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:10 compute-0 sudo[225653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:10 compute-0 sudo[225653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:10 compute-0 sudo[225653]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:10.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:10 compute-0 ceph-mon[74654]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:10 compute-0 sudo[225728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njuicrgrrvnwpsjpaontqfsxcetrmhdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398409.7500236-799-36413398089377/AnsiballZ_lineinfile.py'
Nov 29 06:40:10 compute-0 sudo[225728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:10 compute-0 python3.9[225730]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:10 compute-0 sudo[225728]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:11 compute-0 sudo[225881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjgoesrczbgpkckhgktexlbxzmmlkflc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398410.720047-799-246522797071775/AnsiballZ_lineinfile.py'
Nov 29 06:40:11 compute-0 sudo[225881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:11 compute-0 python3.9[225883]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:11 compute-0 sudo[225881]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:11.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:11 compute-0 ceph-mon[74654]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:11 compute-0 sudo[226033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcdfwtvscfpdvldgyolneyqghwfoxoqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398411.5407422-886-141818380290051/AnsiballZ_stat.py'
Nov 29 06:40:11 compute-0 sudo[226033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:12 compute-0 python3.9[226035]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:40:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:12.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:12 compute-0 sudo[226033]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:12 compute-0 sudo[226187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzokhwfchzwmpurzrtysbuacawkwolup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398412.334183-910-108498495232481/AnsiballZ_file.py'
Nov 29 06:40:12 compute-0 sudo[226187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:12 compute-0 python3.9[226189]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:12 compute-0 sudo[226187]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:40:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:40:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:13.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:13 compute-0 ceph-mon[74654]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:13 compute-0 sudo[226340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldzttrgptosrfasktouoljqrjhugixkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398413.5207562-937-52598329157336/AnsiballZ_file.py'
Nov 29 06:40:13 compute-0 sudo[226340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:14 compute-0 python3.9[226342]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:40:14 compute-0 sudo[226340]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:40:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:40:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:14.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:40:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:14 compute-0 sudo[226492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnatvfyobdynqfgtxcpdwyxplnhbntas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398414.3514423-961-99549215830034/AnsiballZ_stat.py'
Nov 29 06:40:14 compute-0 sudo[226492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:14 compute-0 python3.9[226494]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:40:14 compute-0 sudo[226492]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:15 compute-0 sudo[226571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jofbsiizyubfyailzbqzcazpnwfqwkna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398414.3514423-961-99549215830034/AnsiballZ_file.py'
Nov 29 06:40:15 compute-0 sudo[226571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:15.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:15 compute-0 python3.9[226573]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:40:15 compute-0 sudo[226571]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:16 compute-0 sudo[226723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iywkkivmkaukrmarxkkegqkbinosngdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398415.6722643-961-69961907157249/AnsiballZ_stat.py'
Nov 29 06:40:16 compute-0 sudo[226723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:16.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:16 compute-0 python3.9[226725]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:40:16 compute-0 sudo[226723]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:16 compute-0 ceph-mon[74654]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:16 compute-0 sudo[226801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnrwmvoaxmqqwlplblwboarwphssszqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398415.6722643-961-69961907157249/AnsiballZ_file.py'
Nov 29 06:40:16 compute-0 sudo[226801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:16 compute-0 python3.9[226803]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:40:16 compute-0 sudo[226801]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:40:17.227 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:40:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:40:17.229 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:40:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:40:17.229 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:40:17 compute-0 sudo[226954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptsfgyuzxlxemguwclbrqezjrnfbtdrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398416.9918451-1030-54924935552598/AnsiballZ_file.py'
Nov 29 06:40:17 compute-0 sudo[226954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:40:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:17.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:40:17 compute-0 python3.9[226956]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:17 compute-0 sudo[226954]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:17 compute-0 ceph-mon[74654]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:18 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 29 06:40:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:18.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:18 compute-0 sudo[227107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifdboqijvegypssldpztpxbwikjgejkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398417.930266-1054-270494807490365/AnsiballZ_stat.py'
Nov 29 06:40:18 compute-0 sudo[227107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:18 compute-0 python3.9[227109]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:40:18 compute-0 sudo[227107]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:18 compute-0 sudo[227185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwyqmuyudoeixttxqrlpvwcglqceaags ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398417.930266-1054-270494807490365/AnsiballZ_file.py'
Nov 29 06:40:18 compute-0 sudo[227185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:19 compute-0 python3.9[227187]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:19 compute-0 sudo[227185]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:40:19 compute-0 ceph-mon[74654]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:19.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:19 compute-0 sudo[227338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppyketkesnvccabpgwtbqahizybtzoua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398419.294941-1090-235888644070646/AnsiballZ_stat.py'
Nov 29 06:40:19 compute-0 sudo[227338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:19 compute-0 python3.9[227340]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:40:19 compute-0 sudo[227338]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:20.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:20 compute-0 sudo[227416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdhbexbbxrdniioywucfwuzbsnbtcaic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398419.294941-1090-235888644070646/AnsiballZ_file.py'
Nov 29 06:40:20 compute-0 sudo[227416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:20 compute-0 python3.9[227418]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:20 compute-0 sudo[227416]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:20 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 06:40:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:20 compute-0 sudo[227572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjmahdidxvriksqcbsoaseioqvitvnae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398420.5832646-1126-124704632722543/AnsiballZ_systemd.py'
Nov 29 06:40:20 compute-0 sudo[227572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:21 compute-0 python3.9[227574]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:40:21 compute-0 systemd[1]: Reloading.
Nov 29 06:40:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:21.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:21 compute-0 systemd-sysv-generator[227605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:40:21 compute-0 systemd-rc-local-generator[227600]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:40:21 compute-0 ceph-mon[74654]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:21 compute-0 sudo[227580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:21 compute-0 sudo[227580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:21 compute-0 sudo[227580]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:21 compute-0 sudo[227572]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:21 compute-0 sudo[227636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:40:21 compute-0 sudo[227636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:21 compute-0 sudo[227636]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:21 compute-0 sudo[227661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:21 compute-0 sudo[227661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:21 compute-0 sudo[227661]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:40:22 compute-0 sudo[227710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:40:22 compute-0 sudo[227710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:40:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:40:22 compute-0 sshd-session[227467]: Received disconnect from 118.193.39.127 port 52312:11: Bye Bye [preauth]
Nov 29 06:40:22 compute-0 sshd-session[227467]: Disconnected from authenticating user root 118.193.39.127 port 52312 [preauth]
Nov 29 06:40:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:22.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:40:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 06:40:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 06:40:22 compute-0 sudo[227879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvoyaznzhvflltmzivrjftcsxdtwfbyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398422.118152-1150-80672393558724/AnsiballZ_stat.py'
Nov 29 06:40:22 compute-0 sudo[227879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:22 compute-0 sudo[227710]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 06:40:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 06:40:22 compute-0 python3.9[227882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:40:22 compute-0 sudo[227879]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:40:23 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:40:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:40:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:40:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:40:23 compute-0 sudo[227973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhldwbugxigjefbgqokrywbhpwllbika ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398422.118152-1150-80672393558724/AnsiballZ_file.py'
Nov 29 06:40:23 compute-0 sudo[227973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:23 compute-0 python3.9[227975]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:23 compute-0 sudo[227973]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:40:23 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 97f8f56f-a14c-475e-8c5b-f7fa59061626 does not exist
Nov 29 06:40:23 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 917418f7-2d9f-4a8a-bb67-4def966e263e does not exist
Nov 29 06:40:23 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 1265e801-6934-4613-957d-2ac11acee9a5 does not exist
Nov 29 06:40:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:40:23 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:40:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:40:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:40:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:40:23 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:40:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:40:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:40:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 06:40:23 compute-0 ceph-mon[74654]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 06:40:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:40:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:40:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:23.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:23 compute-0 sudo[227997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:23 compute-0 sudo[227997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:23 compute-0 sudo[227997]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:23 compute-0 sudo[228025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:40:23 compute-0 sudo[228025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:23 compute-0 sudo[228025]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:23 compute-0 sudo[228050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:23 compute-0 sudo[228050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:23 compute-0 sudo[228050]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:23 compute-0 sudo[228075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:40:23 compute-0 sudo[228075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:24 compute-0 sudo[228280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwaglxmrpfmjqzpobnfaasmnlvmtirlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398423.7344372-1186-72313734873192/AnsiballZ_stat.py'
Nov 29 06:40:24 compute-0 sudo[228280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:24.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:40:24 compute-0 podman[228239]: 2025-11-29 06:40:24.03989563 +0000 UTC m=+0.024083413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:40:24 compute-0 podman[228239]: 2025-11-29 06:40:24.187860096 +0000 UTC m=+0.172047859 container create 1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pasteur, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 06:40:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:40:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:40:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:40:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:40:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:40:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:40:24 compute-0 python3.9[228282]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:40:24 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 29 06:40:24 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:24.338442) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 06:40:24 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 29 06:40:24 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398424338478, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 676, "num_deletes": 252, "total_data_size": 932655, "memory_usage": 944960, "flush_reason": "Manual Compaction"}
Nov 29 06:40:24 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 29 06:40:24 compute-0 sudo[228280]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:24 compute-0 systemd[1]: Started libpod-conmon-1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de.scope.
Nov 29 06:40:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:40:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:24 compute-0 sudo[228363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzfktrddiitubsrlvcialooklslnnlvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398423.7344372-1186-72313734873192/AnsiballZ_file.py'
Nov 29 06:40:24 compute-0 sudo[228363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398425005737, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 924399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14491, "largest_seqno": 15166, "table_properties": {"data_size": 920862, "index_size": 1381, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 6914, "raw_average_key_size": 16, "raw_value_size": 913841, "raw_average_value_size": 2191, "num_data_blocks": 63, "num_entries": 417, "num_filter_entries": 417, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764398369, "oldest_key_time": 1764398369, "file_creation_time": 1764398424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 667387 microseconds, and 3184 cpu microseconds.
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:40:25 compute-0 python3.9[228365]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:25 compute-0 sudo[228363]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:25 compute-0 sshd-session[227938]: Received disconnect from 101.47.163.116 port 33774:11: Bye Bye [preauth]
Nov 29 06:40:25 compute-0 sshd-session[227938]: Disconnected from authenticating user root 101.47.163.116 port 33774 [preauth]
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.005821) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 924399 bytes OK
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.005847) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.188668) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.188739) EVENT_LOG_v1 {"time_micros": 1764398425188729, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.188759) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 929159, prev total WAL file size 933954, number of live WAL files 2.
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.189438) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323533' seq:0, type:0; will stop at (end)
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(902KB)], [32(8955KB)]
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398425189627, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 10094481, "oldest_snapshot_seqno": -1}
Nov 29 06:40:25 compute-0 podman[228239]: 2025-11-29 06:40:25.41887977 +0000 UTC m=+1.403067603 container init 1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pasteur, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:40:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:40:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:40:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:40:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:40:25 compute-0 podman[228239]: 2025-11-29 06:40:25.42931719 +0000 UTC m=+1.413504933 container start 1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 06:40:25 compute-0 priceless_pasteur[228310]: 167 167
Nov 29 06:40:25 compute-0 systemd[1]: libpod-1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de.scope: Deactivated successfully.
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4510 keys, 9525503 bytes, temperature: kUnknown
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398425454367, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 9525503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9492641, "index_size": 20464, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 112118, "raw_average_key_size": 24, "raw_value_size": 9408376, "raw_average_value_size": 2086, "num_data_blocks": 864, "num_entries": 4510, "num_filter_entries": 4510, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764398425, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:40:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:25.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.454747) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 9525503 bytes
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.518733) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 38.1 rd, 36.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.7 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(21.2) write-amplify(10.3) OK, records in: 5027, records dropped: 517 output_compression: NoCompression
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.518785) EVENT_LOG_v1 {"time_micros": 1764398425518772, "job": 14, "event": "compaction_finished", "compaction_time_micros": 264907, "compaction_time_cpu_micros": 25766, "output_level": 6, "num_output_files": 1, "total_output_size": 9525503, "num_input_records": 5027, "num_output_records": 4510, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398425519063, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398425520412, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.189324) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.520484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.520490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.520492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.520494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:40:25 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:40:25.520496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:40:25 compute-0 podman[228239]: 2025-11-29 06:40:25.55795746 +0000 UTC m=+1.542145243 container attach 1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pasteur, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:40:25 compute-0 podman[228239]: 2025-11-29 06:40:25.559124524 +0000 UTC m=+1.543312287 container died 1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pasteur, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 06:40:25 compute-0 sudo[228527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdpfezlroogerwgjalkdpmbsmszretua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398425.3336887-1222-6232260612375/AnsiballZ_systemd.py'
Nov 29 06:40:25 compute-0 sudo[228527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:26 compute-0 python3.9[228529]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:40:26 compute-0 systemd[1]: Reloading.
Nov 29 06:40:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:26.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:26 compute-0 systemd-rc-local-generator[228555]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:40:26 compute-0 systemd-sysv-generator[228558]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:40:26 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 06:40:26 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 06:40:26 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 06:40:26 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 06:40:26 compute-0 sudo[228527]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e61b0a30fde40895470d6e92d3a30c540c196a598f43ee9d609b105b1abaf0a7-merged.mount: Deactivated successfully.
Nov 29 06:40:26 compute-0 ceph-mon[74654]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:27.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:27 compute-0 podman[228239]: 2025-11-29 06:40:27.475694854 +0000 UTC m=+3.459882597 container remove 1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:40:27 compute-0 sudo[228723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjzltdcakmntiqeakkkcphrvurouemeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398427.1549366-1252-16589088357049/AnsiballZ_file.py'
Nov 29 06:40:27 compute-0 sudo[228723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:27 compute-0 systemd[1]: libpod-conmon-1b2de9e320ddf678c86f43501e1142222d507bc669cc473afab82a7fcc6ac3de.scope: Deactivated successfully.
Nov 29 06:40:27 compute-0 python3.9[228725]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:40:27 compute-0 sudo[228723]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:27 compute-0 podman[228731]: 2025-11-29 06:40:27.642006127 +0000 UTC m=+0.023479916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:40:27 compute-0 podman[228731]: 2025-11-29 06:40:27.779337167 +0000 UTC m=+0.160810936 container create be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:40:28 compute-0 systemd[1]: Started libpod-conmon-be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c.scope.
Nov 29 06:40:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4fb003868971a3cc043beb0e80173be7088ff93f618d3b6f37ca652356a748/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4fb003868971a3cc043beb0e80173be7088ff93f618d3b6f37ca652356a748/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4fb003868971a3cc043beb0e80173be7088ff93f618d3b6f37ca652356a748/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4fb003868971a3cc043beb0e80173be7088ff93f618d3b6f37ca652356a748/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4fb003868971a3cc043beb0e80173be7088ff93f618d3b6f37ca652356a748/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:28.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:28 compute-0 ceph-mon[74654]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:28 compute-0 podman[228731]: 2025-11-29 06:40:28.397190187 +0000 UTC m=+0.778663966 container init be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:40:28 compute-0 podman[228731]: 2025-11-29 06:40:28.411795927 +0000 UTC m=+0.793269696 container start be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 06:40:28 compute-0 sudo[228899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edntonvbdagsxfknuzmdbfrdxypkyyfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398428.0541244-1276-73338465434031/AnsiballZ_stat.py'
Nov 29 06:40:28 compute-0 sudo[228899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:28 compute-0 python3.9[228903]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:40:28 compute-0 sudo[228899]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:29 compute-0 podman[228731]: 2025-11-29 06:40:29.103695176 +0000 UTC m=+1.485168965 container attach be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:40:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:40:29 compute-0 recursing_faraday[228796]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:40:29 compute-0 recursing_faraday[228796]: --> relative data size: 1.0
Nov 29 06:40:29 compute-0 recursing_faraday[228796]: --> All data devices are unavailable
Nov 29 06:40:29 compute-0 systemd[1]: libpod-be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c.scope: Deactivated successfully.
Nov 29 06:40:29 compute-0 podman[228731]: 2025-11-29 06:40:29.300834466 +0000 UTC m=+1.682308265 container died be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:40:29 compute-0 sudo[229046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbiqziyovgjmbshvielktbumahqhtwyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398428.0541244-1276-73338465434031/AnsiballZ_copy.py'
Nov 29 06:40:29 compute-0 sudo[229046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:40:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:29.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:40:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:40:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:40:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:40:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:40:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:40:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:40:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:40:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:40:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:40:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:40:29 compute-0 python3.9[229048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398428.0541244-1276-73338465434031/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:40:29 compute-0 sudo[229046]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:30.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:30 compute-0 sudo[229073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:30 compute-0 sudo[229073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:30 compute-0 sudo[229073]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:30 compute-0 sudo[229098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:30 compute-0 sudo[229098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:30 compute-0 ceph-mon[74654]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:30 compute-0 sudo[229098]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:30 compute-0 sudo[229248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvajpkmrditigftazjwfqpuyujsywfmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398430.2983768-1327-231975180401106/AnsiballZ_file.py'
Nov 29 06:40:30 compute-0 sudo[229248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b4fb003868971a3cc043beb0e80173be7088ff93f618d3b6f37ca652356a748-merged.mount: Deactivated successfully.
Nov 29 06:40:30 compute-0 python3.9[229251]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:40:30 compute-0 sudo[229248]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:31 compute-0 podman[228731]: 2025-11-29 06:40:31.153048035 +0000 UTC m=+3.534521834 container remove be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:40:31 compute-0 systemd[1]: libpod-conmon-be74943aae6f2be94595527776014d6514f8cbf403d35dae8464b1d71d385f3c.scope: Deactivated successfully.
Nov 29 06:40:31 compute-0 sudo[228075]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:31 compute-0 sudo[229319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:31 compute-0 sudo[229319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:31 compute-0 sudo[229319]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:31 compute-0 sudo[229356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:40:31 compute-0 sudo[229356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:31 compute-0 sudo[229356]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:31 compute-0 sudo[229404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:31 compute-0 sudo[229404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:31 compute-0 sudo[229404]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:31.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:31 compute-0 sudo[229454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:40:31 compute-0 sudo[229454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:31 compute-0 sudo[229502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmczhrhjawoeyvevediwlkzjhuwznwui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398431.1768239-1351-92268449728319/AnsiballZ_stat.py'
Nov 29 06:40:31 compute-0 sudo[229502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:31 compute-0 python3.9[229506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:40:31 compute-0 sudo[229502]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:31 compute-0 podman[229571]: 2025-11-29 06:40:31.815654052 +0000 UTC m=+0.041489964 container create 7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:40:31 compute-0 systemd[1]: Started libpod-conmon-7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a.scope.
Nov 29 06:40:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:40:31 compute-0 podman[229571]: 2025-11-29 06:40:31.795503403 +0000 UTC m=+0.021339295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:40:31 compute-0 podman[229571]: 2025-11-29 06:40:31.898452104 +0000 UTC m=+0.124288066 container init 7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 06:40:31 compute-0 podman[229571]: 2025-11-29 06:40:31.90527668 +0000 UTC m=+0.131112592 container start 7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:40:31 compute-0 nifty_cannon[229611]: 167 167
Nov 29 06:40:31 compute-0 podman[229571]: 2025-11-29 06:40:31.911359355 +0000 UTC m=+0.137195267 container attach 7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:40:31 compute-0 systemd[1]: libpod-7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a.scope: Deactivated successfully.
Nov 29 06:40:31 compute-0 conmon[229611]: conmon 7e094777e910dc423ff8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a.scope/container/memory.events
Nov 29 06:40:31 compute-0 podman[229571]: 2025-11-29 06:40:31.913542638 +0000 UTC m=+0.139378550 container died 7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 06:40:31 compute-0 ceph-mon[74654]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-521f917365407235f33703f6d5055f307459a7544f5d75fc14e95e04fc3fc8cc-merged.mount: Deactivated successfully.
Nov 29 06:40:31 compute-0 podman[229571]: 2025-11-29 06:40:31.962020382 +0000 UTC m=+0.187856264 container remove 7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 06:40:31 compute-0 systemd[1]: libpod-conmon-7e094777e910dc423ff854828fd96fe989f17c3a4fabe9489a6914276afd2c0a.scope: Deactivated successfully.
Nov 29 06:40:32 compute-0 sudo[229708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsvtlhomhearrvatgglhsvvxejrwfyas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398431.1768239-1351-92268449728319/AnsiballZ_copy.py'
Nov 29 06:40:32 compute-0 sudo[229708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:32.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:32 compute-0 podman[229706]: 2025-11-29 06:40:32.158374739 +0000 UTC m=+0.044333226 container create e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 06:40:32 compute-0 systemd[1]: Started libpod-conmon-e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6.scope.
Nov 29 06:40:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9aa4e6e5a3cfdb8a0a13535542de4fa53995028302654c2225761e7310751cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9aa4e6e5a3cfdb8a0a13535542de4fa53995028302654c2225761e7310751cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9aa4e6e5a3cfdb8a0a13535542de4fa53995028302654c2225761e7310751cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9aa4e6e5a3cfdb8a0a13535542de4fa53995028302654c2225761e7310751cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:32 compute-0 podman[229706]: 2025-11-29 06:40:32.138875238 +0000 UTC m=+0.024833745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:40:32 compute-0 podman[229706]: 2025-11-29 06:40:32.243774825 +0000 UTC m=+0.129733342 container init e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:40:32 compute-0 podman[229706]: 2025-11-29 06:40:32.252202998 +0000 UTC m=+0.138161485 container start e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:40:32 compute-0 podman[229706]: 2025-11-29 06:40:32.256575463 +0000 UTC m=+0.142533970 container attach e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wu, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:40:32 compute-0 python3.9[229715]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398431.1768239-1351-92268449728319/.source.json _original_basename=.zmezc377 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:32 compute-0 sudo[229708]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:32 compute-0 sudo[229879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqqqzowgepnubigmdqywvxnztmsmmygn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398432.5155997-1396-186381089893106/AnsiballZ_file.py'
Nov 29 06:40:32 compute-0 sudo[229879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:33 compute-0 amazing_wu[229725]: {
Nov 29 06:40:33 compute-0 amazing_wu[229725]:     "1": [
Nov 29 06:40:33 compute-0 amazing_wu[229725]:         {
Nov 29 06:40:33 compute-0 amazing_wu[229725]:             "devices": [
Nov 29 06:40:33 compute-0 amazing_wu[229725]:                 "/dev/loop3"
Nov 29 06:40:33 compute-0 amazing_wu[229725]:             ],
Nov 29 06:40:33 compute-0 amazing_wu[229725]:             "lv_name": "ceph_lv0",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:             "lv_size": "7511998464",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:             "name": "ceph_lv0",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:             "tags": {
Nov 29 06:40:33 compute-0 amazing_wu[229725]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:                 "ceph.cluster_name": "ceph",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:                 "ceph.crush_device_class": "",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:                 "ceph.encrypted": "0",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:                 "ceph.osd_id": "1",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:                 "ceph.type": "block",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:                 "ceph.vdo": "0"
Nov 29 06:40:33 compute-0 amazing_wu[229725]:             },
Nov 29 06:40:33 compute-0 amazing_wu[229725]:             "type": "block",
Nov 29 06:40:33 compute-0 amazing_wu[229725]:             "vg_name": "ceph_vg0"
Nov 29 06:40:33 compute-0 amazing_wu[229725]:         }
Nov 29 06:40:33 compute-0 amazing_wu[229725]:     ]
Nov 29 06:40:33 compute-0 amazing_wu[229725]: }
Nov 29 06:40:33 compute-0 systemd[1]: libpod-e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6.scope: Deactivated successfully.
Nov 29 06:40:33 compute-0 podman[229706]: 2025-11-29 06:40:33.044487044 +0000 UTC m=+0.930445571 container died e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9aa4e6e5a3cfdb8a0a13535542de4fa53995028302654c2225761e7310751cf-merged.mount: Deactivated successfully.
Nov 29 06:40:33 compute-0 python3.9[229881]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:33 compute-0 podman[229706]: 2025-11-29 06:40:33.12640524 +0000 UTC m=+1.012363747 container remove e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 06:40:33 compute-0 systemd[1]: libpod-conmon-e383cf39f3fcd59be8128407fd65c3efe698aa22d7f31c3f6f1811e963579ac6.scope: Deactivated successfully.
Nov 29 06:40:33 compute-0 sudo[229454]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:33 compute-0 sudo[229879]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:33 compute-0 sudo[229901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:33 compute-0 sudo[229901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:33 compute-0 sudo[229901]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:33 compute-0 sudo[229950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:40:33 compute-0 sudo[229950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:33 compute-0 sudo[229950]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:33 compute-0 sudo[229975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:33 compute-0 sudo[229975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:33 compute-0 sudo[229975]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:33 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 29 06:40:33 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 29 06:40:33 compute-0 sudo[230000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:40:33 compute-0 sudo[230000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:33.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:33 compute-0 ceph-mon[74654]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:33 compute-0 sudo[230193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agandnklcnocslhxtfgqgrxenfumqogg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398433.407975-1420-254714352840392/AnsiballZ_stat.py'
Nov 29 06:40:33 compute-0 sudo[230193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:33 compute-0 podman[230194]: 2025-11-29 06:40:33.805593483 +0000 UTC m=+0.049097933 container create decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_stonebraker, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 06:40:33 compute-0 systemd[1]: Started libpod-conmon-decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d.scope.
Nov 29 06:40:33 compute-0 podman[230194]: 2025-11-29 06:40:33.78357439 +0000 UTC m=+0.027078830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:40:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:40:33 compute-0 podman[230194]: 2025-11-29 06:40:33.89938801 +0000 UTC m=+0.142892440 container init decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_stonebraker, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 06:40:33 compute-0 podman[230194]: 2025-11-29 06:40:33.908396669 +0000 UTC m=+0.151901119 container start decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_stonebraker, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:40:33 compute-0 podman[230194]: 2025-11-29 06:40:33.912514278 +0000 UTC m=+0.156018688 container attach decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 06:40:33 compute-0 mystifying_stonebraker[230212]: 167 167
Nov 29 06:40:33 compute-0 systemd[1]: libpod-decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d.scope: Deactivated successfully.
Nov 29 06:40:33 compute-0 podman[230194]: 2025-11-29 06:40:33.918814039 +0000 UTC m=+0.162318509 container died decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_stonebraker, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-20b004c71362bebcbb9ec09681dfe4f97429e9121f859388f7e265f9e734a1ad-merged.mount: Deactivated successfully.
Nov 29 06:40:33 compute-0 podman[230194]: 2025-11-29 06:40:33.962737302 +0000 UTC m=+0.206241712 container remove decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_stonebraker, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 06:40:33 compute-0 systemd[1]: libpod-conmon-decf6a05f8fc224ab1790c06d5673247d5377f115431305d9c186afdc0e6353d.scope: Deactivated successfully.
Nov 29 06:40:34 compute-0 sudo[230193]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:34.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:34 compute-0 podman[230258]: 2025-11-29 06:40:34.136056657 +0000 UTC m=+0.049570277 container create fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 06:40:34 compute-0 systemd[1]: Started libpod-conmon-fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906.scope.
Nov 29 06:40:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:40:34 compute-0 podman[230258]: 2025-11-29 06:40:34.117231376 +0000 UTC m=+0.030745016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:40:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1fc54721348bdd42457435458eb2ae4f05aa656614261b270eb07364e027d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1fc54721348bdd42457435458eb2ae4f05aa656614261b270eb07364e027d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1fc54721348bdd42457435458eb2ae4f05aa656614261b270eb07364e027d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1fc54721348bdd42457435458eb2ae4f05aa656614261b270eb07364e027d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:34 compute-0 podman[230258]: 2025-11-29 06:40:34.23351423 +0000 UTC m=+0.147027860 container init fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 06:40:34 compute-0 podman[230258]: 2025-11-29 06:40:34.243851257 +0000 UTC m=+0.157364897 container start fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 06:40:34 compute-0 podman[230258]: 2025-11-29 06:40:34.249689015 +0000 UTC m=+0.163202635 container attach fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:40:34 compute-0 sudo[230375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fagmajcmoqbhxadvtbvkvbkvoqbfblzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398433.407975-1420-254714352840392/AnsiballZ_copy.py'
Nov 29 06:40:34 compute-0 sudo[230375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:34 compute-0 sudo[230375]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:35 compute-0 xenodochial_bohr[230298]: {
Nov 29 06:40:35 compute-0 xenodochial_bohr[230298]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:40:35 compute-0 xenodochial_bohr[230298]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:40:35 compute-0 xenodochial_bohr[230298]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:40:35 compute-0 xenodochial_bohr[230298]:         "osd_id": 1,
Nov 29 06:40:35 compute-0 xenodochial_bohr[230298]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:40:35 compute-0 xenodochial_bohr[230298]:         "type": "bluestore"
Nov 29 06:40:35 compute-0 xenodochial_bohr[230298]:     }
Nov 29 06:40:35 compute-0 xenodochial_bohr[230298]: }
Nov 29 06:40:35 compute-0 systemd[1]: libpod-fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906.scope: Deactivated successfully.
Nov 29 06:40:35 compute-0 podman[230419]: 2025-11-29 06:40:35.168256334 +0000 UTC m=+0.022446837 container died fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 06:40:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:40:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:35.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:40:35 compute-0 ceph-mon[74654]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc1fc54721348bdd42457435458eb2ae4f05aa656614261b270eb07364e027d2-merged.mount: Deactivated successfully.
Nov 29 06:40:35 compute-0 podman[230419]: 2025-11-29 06:40:35.829661786 +0000 UTC m=+0.683852279 container remove fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bohr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 06:40:35 compute-0 systemd[1]: libpod-conmon-fc783aa8e4c2780a8de5f41d47c892aa2ec961cc2438d45cf206477d23129906.scope: Deactivated successfully.
Nov 29 06:40:35 compute-0 podman[230420]: 2025-11-29 06:40:35.872543279 +0000 UTC m=+0.700405725 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:40:35 compute-0 sudo[230000]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:40:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:40:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:40:35 compute-0 podman[230429]: 2025-11-29 06:40:35.905732414 +0000 UTC m=+0.734154346 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 06:40:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:40:35 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 29b23a40-97e4-4f65-9381-0ca0007f91ec does not exist
Nov 29 06:40:35 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 6adbe004-1a6c-497c-b428-a391bc75fd3d does not exist
Nov 29 06:40:35 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 647ffd37-79e2-43fd-92cf-3361ba30d02f does not exist
Nov 29 06:40:35 compute-0 sudo[230532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:35 compute-0 sudo[230532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:35 compute-0 sudo[230532]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:36 compute-0 sudo[230557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:40:36 compute-0 sudo[230557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:36 compute-0 sudo[230557]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:36.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:37 compute-0 sudo[230658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qukuvjutappnuwxgkavuggjghngecwwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398435.6443794-1471-21281637641263/AnsiballZ_container_config_data.py'
Nov 29 06:40:37 compute-0 sudo[230658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:37 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:40:37 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:40:37 compute-0 sshd-session[230500]: Invalid user packer from 103.63.25.115 port 50058
Nov 29 06:40:37 compute-0 python3.9[230660]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 29 06:40:37 compute-0 sudo[230658]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:37 compute-0 sshd-session[230500]: Received disconnect from 103.63.25.115 port 50058:11: Bye Bye [preauth]
Nov 29 06:40:37 compute-0 sshd-session[230500]: Disconnected from invalid user packer 103.63.25.115 port 50058 [preauth]
Nov 29 06:40:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:37.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:38.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:38 compute-0 ceph-mon[74654]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:38 compute-0 sudo[230810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivfbvcieouyfrgvhdbbhrcsjjwizprdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398437.7251189-1498-64672682377705/AnsiballZ_container_config_hash.py'
Nov 29 06:40:38 compute-0 sudo[230810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:38 compute-0 sshd-session[230605]: Invalid user usuario1 from 103.147.159.91 port 54308
Nov 29 06:40:38 compute-0 python3.9[230812]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 06:40:38 compute-0 sudo[230810]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:38 compute-0 sshd-session[230605]: Received disconnect from 103.147.159.91 port 54308:11: Bye Bye [preauth]
Nov 29 06:40:38 compute-0 sshd-session[230605]: Disconnected from invalid user usuario1 103.147.159.91 port 54308 [preauth]
Nov 29 06:40:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:40:39 compute-0 ceph-mon[74654]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:39 compute-0 sudo[230963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iixhobxkhxwyyjvgddcrsnlmepvsytlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398438.8955324-1525-150246905385412/AnsiballZ_podman_container_info.py'
Nov 29 06:40:39 compute-0 sudo[230963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:39.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:39 compute-0 python3.9[230965]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 06:40:39 compute-0 sudo[230963]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:40.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:41.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:41 compute-0 ceph-mon[74654]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:41 compute-0 sudo[231143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgcwieqxesfyztutbvuefcpqkbocneai ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764398441.0863175-1564-69517131640562/AnsiballZ_edpm_container_manage.py'
Nov 29 06:40:41 compute-0 sudo[231143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:42 compute-0 python3[231145]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 06:40:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:42.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:43.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:44.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:40:44 compute-0 ceph-mon[74654]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:44 compute-0 podman[231158]: 2025-11-29 06:40:44.924162705 +0000 UTC m=+2.816803193 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 06:40:45 compute-0 podman[231217]: 2025-11-29 06:40:45.064274474 +0000 UTC m=+0.027692567 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 06:40:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:45.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:45 compute-0 ceph-mon[74654]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:45 compute-0 podman[231217]: 2025-11-29 06:40:45.754505446 +0000 UTC m=+0.717923459 container create 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 29 06:40:45 compute-0 python3[231145]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 06:40:45 compute-0 sudo[231143]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:46.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:46 compute-0 sudo[231405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-affcihhhnjscxyrhqasngfpwxmnqntiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398446.0914977-1588-28354089026768/AnsiballZ_stat.py'
Nov 29 06:40:46 compute-0 sudo[231405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:46 compute-0 python3.9[231407]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:40:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:46 compute-0 sudo[231405]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:47 compute-0 sudo[231562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubcejgkexkxzakytgvgobtyzsbljytge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398447.0138211-1615-121747224163378/AnsiballZ_file.py'
Nov 29 06:40:47 compute-0 sudo[231562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:47.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:47 compute-0 sshd-session[231514]: Invalid user dev from 162.214.92.14 port 53930
Nov 29 06:40:47 compute-0 python3.9[231564]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:47 compute-0 sudo[231562]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:47 compute-0 sshd-session[231514]: Received disconnect from 162.214.92.14 port 53930:11: Bye Bye [preauth]
Nov 29 06:40:47 compute-0 sshd-session[231514]: Disconnected from invalid user dev 162.214.92.14 port 53930 [preauth]
Nov 29 06:40:47 compute-0 sudo[231638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytwjxfhephbuhoszucnbpvrymijrcmrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398447.0138211-1615-121747224163378/AnsiballZ_stat.py'
Nov 29 06:40:47 compute-0 ceph-mon[74654]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:47 compute-0 sudo[231638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:48.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:48 compute-0 python3.9[231640]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:40:48 compute-0 sudo[231638]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:48 compute-0 sudo[231789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idtdctokymzibxenejzbwhgdldfegywo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398448.2444339-1615-230093002591453/AnsiballZ_copy.py'
Nov 29 06:40:48 compute-0 sudo[231789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:48 compute-0 python3.9[231791]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764398448.2444339-1615-230093002591453/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:48 compute-0 sudo[231789]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:40:49 compute-0 sudo[231866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmekdtzopyrzypkzjbzmmzfefmzkualk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398448.2444339-1615-230093002591453/AnsiballZ_systemd.py'
Nov 29 06:40:49 compute-0 sudo[231866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:49.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:49 compute-0 python3.9[231868]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 06:40:49 compute-0 systemd[1]: Reloading.
Nov 29 06:40:49 compute-0 ceph-mon[74654]: pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:49 compute-0 systemd-rc-local-generator[231894]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:40:49 compute-0 systemd-sysv-generator[231899]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:40:49 compute-0 sudo[231866]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:50.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:50 compute-0 sudo[231994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyyrnclcymztutgxojrzkeunegsloyxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398448.2444339-1615-230093002591453/AnsiballZ_systemd.py'
Nov 29 06:40:50 compute-0 sudo[231994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:50 compute-0 sudo[231961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:50 compute-0 sudo[231961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:50 compute-0 sudo[231961]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:50 compute-0 sudo[232005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:40:50 compute-0 sudo[232005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:40:50 compute-0 sudo[232005]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:50 compute-0 python3.9[232002]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:40:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:50 compute-0 systemd[1]: Reloading.
Nov 29 06:40:50 compute-0 systemd-rc-local-generator[232053]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:40:50 compute-0 systemd-sysv-generator[232057]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:40:51 compute-0 systemd[1]: Starting multipathd container...
Nov 29 06:40:51 compute-0 ceph-mon[74654]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:40:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe5844f7a04a75d12d37d3387717b7a1ae468d4e6f0199bcf710cee4e3c640b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe5844f7a04a75d12d37d3387717b7a1ae468d4e6f0199bcf710cee4e3c640b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:51 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8.
Nov 29 06:40:51 compute-0 podman[232069]: 2025-11-29 06:40:51.504010264 +0000 UTC m=+0.371242857 container init 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd)
Nov 29 06:40:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:51.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:51 compute-0 multipathd[232084]: + sudo -E kolla_set_configs
Nov 29 06:40:51 compute-0 podman[232069]: 2025-11-29 06:40:51.536968382 +0000 UTC m=+0.404200885 container start 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 06:40:51 compute-0 podman[232069]: multipathd
Nov 29 06:40:51 compute-0 sudo[232090]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 29 06:40:51 compute-0 sudo[232090]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 06:40:51 compute-0 sudo[232090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 06:40:51 compute-0 systemd[1]: Started multipathd container.
Nov 29 06:40:51 compute-0 sudo[231994]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:51 compute-0 multipathd[232084]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 06:40:51 compute-0 multipathd[232084]: INFO:__main__:Validating config file
Nov 29 06:40:51 compute-0 multipathd[232084]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 06:40:51 compute-0 multipathd[232084]: INFO:__main__:Writing out command to execute
Nov 29 06:40:51 compute-0 sudo[232090]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:51 compute-0 multipathd[232084]: ++ cat /run_command
Nov 29 06:40:51 compute-0 multipathd[232084]: + CMD='/usr/sbin/multipathd -d'
Nov 29 06:40:51 compute-0 multipathd[232084]: + ARGS=
Nov 29 06:40:51 compute-0 multipathd[232084]: + sudo kolla_copy_cacerts
Nov 29 06:40:51 compute-0 podman[232091]: 2025-11-29 06:40:51.631148581 +0000 UTC m=+0.085547841 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 06:40:51 compute-0 sudo[232113]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 29 06:40:51 compute-0 sudo[232113]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 06:40:51 compute-0 sudo[232113]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 06:40:51 compute-0 systemd[1]: 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8-316cda398e766f7e.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 06:40:51 compute-0 systemd[1]: 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8-316cda398e766f7e.service: Failed with result 'exit-code'.
Nov 29 06:40:51 compute-0 sudo[232113]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:51 compute-0 multipathd[232084]: + [[ ! -n '' ]]
Nov 29 06:40:51 compute-0 multipathd[232084]: + . kolla_extend_start
Nov 29 06:40:51 compute-0 multipathd[232084]: Running command: '/usr/sbin/multipathd -d'
Nov 29 06:40:51 compute-0 multipathd[232084]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 06:40:51 compute-0 multipathd[232084]: + umask 0022
Nov 29 06:40:51 compute-0 multipathd[232084]: + exec /usr/sbin/multipathd -d
Nov 29 06:40:51 compute-0 multipathd[232084]: 3901.902802 | --------start up--------
Nov 29 06:40:51 compute-0 multipathd[232084]: 3901.902821 | read /etc/multipath.conf
Nov 29 06:40:51 compute-0 multipathd[232084]: 3901.909830 | path checkers start up
Nov 29 06:40:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:52.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:53 compute-0 python3.9[232272]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:40:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:53.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:53 compute-0 sudo[232425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjcarohmkurspfnwezdwmghbrftcaajs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398453.232908-1723-198654790267934/AnsiballZ_command.py'
Nov 29 06:40:53 compute-0 sudo[232425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:53 compute-0 python3.9[232427]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:40:53 compute-0 sudo[232425]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:54.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:40:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:40:54
Nov 29 06:40:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:40:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:40:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'images', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Nov 29 06:40:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:40:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:40:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:40:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:40:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:40:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:40:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:40:54 compute-0 ceph-mon[74654]: pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:54 compute-0 sudo[232590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loichelqysxugvjfrpgzzgadnkmpmsnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398454.190133-1747-224123297272937/AnsiballZ_systemd.py'
Nov 29 06:40:54 compute-0 sudo[232590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:54 compute-0 python3.9[232592]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:40:54 compute-0 systemd[1]: Stopping multipathd container...
Nov 29 06:40:55 compute-0 multipathd[232084]: 3905.450144 | exit (signal)
Nov 29 06:40:55 compute-0 multipathd[232084]: 3905.450295 | --------shut down-------
Nov 29 06:40:55 compute-0 systemd[1]: libpod-843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8.scope: Deactivated successfully.
Nov 29 06:40:55 compute-0 podman[232596]: 2025-11-29 06:40:55.243369879 +0000 UTC m=+0.298079494 container died 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 29 06:40:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:55.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:55 compute-0 systemd[1]: 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8-316cda398e766f7e.timer: Deactivated successfully.
Nov 29 06:40:55 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8.
Nov 29 06:40:55 compute-0 ceph-mon[74654]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8-userdata-shm.mount: Deactivated successfully.
Nov 29 06:40:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fe5844f7a04a75d12d37d3387717b7a1ae468d4e6f0199bcf710cee4e3c640b-merged.mount: Deactivated successfully.
Nov 29 06:40:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:40:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:56.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:40:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:57 compute-0 podman[232596]: 2025-11-29 06:40:57.066196795 +0000 UTC m=+2.120906370 container cleanup 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 29 06:40:57 compute-0 podman[232596]: multipathd
Nov 29 06:40:57 compute-0 podman[232628]: multipathd
Nov 29 06:40:57 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 29 06:40:57 compute-0 systemd[1]: Stopped multipathd container.
Nov 29 06:40:57 compute-0 systemd[1]: Starting multipathd container...
Nov 29 06:40:57 compute-0 sshd-session[232625]: Received disconnect from 103.143.238.173 port 38124:11: Bye Bye [preauth]
Nov 29 06:40:57 compute-0 sshd-session[232625]: Disconnected from authenticating user root 103.143.238.173 port 38124 [preauth]
Nov 29 06:40:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe5844f7a04a75d12d37d3387717b7a1ae468d4e6f0199bcf710cee4e3c640b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe5844f7a04a75d12d37d3387717b7a1ae468d4e6f0199bcf710cee4e3c640b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 06:40:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8.
Nov 29 06:40:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:40:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:57.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:40:57 compute-0 podman[232641]: 2025-11-29 06:40:57.662820574 +0000 UTC m=+0.500497926 container init 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 06:40:57 compute-0 multipathd[232656]: + sudo -E kolla_set_configs
Nov 29 06:40:57 compute-0 podman[232641]: 2025-11-29 06:40:57.696595045 +0000 UTC m=+0.534272337 container start 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:40:57 compute-0 sudo[232662]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 29 06:40:57 compute-0 sudo[232662]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 06:40:57 compute-0 sudo[232662]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 06:40:57 compute-0 podman[232641]: multipathd
Nov 29 06:40:57 compute-0 systemd[1]: Started multipathd container.
Nov 29 06:40:57 compute-0 multipathd[232656]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 06:40:57 compute-0 multipathd[232656]: INFO:__main__:Validating config file
Nov 29 06:40:57 compute-0 multipathd[232656]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 06:40:57 compute-0 multipathd[232656]: INFO:__main__:Writing out command to execute
Nov 29 06:40:57 compute-0 sudo[232662]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:57 compute-0 multipathd[232656]: ++ cat /run_command
Nov 29 06:40:57 compute-0 multipathd[232656]: + CMD='/usr/sbin/multipathd -d'
Nov 29 06:40:57 compute-0 multipathd[232656]: + ARGS=
Nov 29 06:40:57 compute-0 multipathd[232656]: + sudo kolla_copy_cacerts
Nov 29 06:40:57 compute-0 sudo[232590]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:57 compute-0 sudo[232679]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 29 06:40:57 compute-0 sudo[232679]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 06:40:57 compute-0 sudo[232679]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 06:40:57 compute-0 sudo[232679]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:57 compute-0 multipathd[232656]: + [[ ! -n '' ]]
Nov 29 06:40:57 compute-0 multipathd[232656]: + . kolla_extend_start
Nov 29 06:40:57 compute-0 multipathd[232656]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 06:40:57 compute-0 multipathd[232656]: Running command: '/usr/sbin/multipathd -d'
Nov 29 06:40:57 compute-0 multipathd[232656]: + umask 0022
Nov 29 06:40:57 compute-0 multipathd[232656]: + exec /usr/sbin/multipathd -d
Nov 29 06:40:57 compute-0 podman[232663]: 2025-11-29 06:40:57.833794181 +0000 UTC m=+0.124355967 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 06:40:57 compute-0 multipathd[232656]: 3908.078309 | --------start up--------
Nov 29 06:40:57 compute-0 multipathd[232656]: 3908.078330 | read /etc/multipath.conf
Nov 29 06:40:57 compute-0 systemd[1]: 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8-41be233bebe0a7b2.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 06:40:57 compute-0 systemd[1]: 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8-41be233bebe0a7b2.service: Failed with result 'exit-code'.
Nov 29 06:40:57 compute-0 multipathd[232656]: 3908.083962 | path checkers start up
Nov 29 06:40:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:40:58.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:58 compute-0 ceph-mon[74654]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:58 compute-0 sudo[232847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccikdwpaigtpxbrupyjeyhpxlpdjjiad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398458.1554193-1771-149622822466239/AnsiballZ_file.py'
Nov 29 06:40:58 compute-0 sudo[232847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:40:58 compute-0 python3.9[232849]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:40:58 compute-0 sudo[232847]: pam_unix(sudo:session): session closed for user root
Nov 29 06:40:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:40:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:40:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:40:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:40:59.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:40:59 compute-0 sudo[233002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-papwyjivvziervlrvhimncuwbltsjzin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398459.469461-1807-183063584583247/AnsiballZ_file.py'
Nov 29 06:40:59 compute-0 sudo[233002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:40:59 compute-0 python3.9[233004]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 06:40:59 compute-0 sudo[233002]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:00.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:00 compute-0 sshd-session[232875]: Received disconnect from 176.109.67.96 port 53218:11: Bye Bye [preauth]
Nov 29 06:41:00 compute-0 sshd-session[232875]: Disconnected from authenticating user root 176.109.67.96 port 53218 [preauth]
Nov 29 06:41:00 compute-0 sudo[233154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erjrnvlwdfgjzpzmebalubqnryzionmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398460.287208-1831-64346203694098/AnsiballZ_modprobe.py'
Nov 29 06:41:00 compute-0 sudo[233154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:00 compute-0 ceph-mon[74654]: pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:00 compute-0 python3.9[233156]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 29 06:41:00 compute-0 kernel: Key type psk registered
Nov 29 06:41:00 compute-0 sudo[233154]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:01 compute-0 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 06:41:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:41:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:01.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:41:01 compute-0 sudo[233317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjoqihhxgsdjdkxmppcnwzgkqtjvjbpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398461.168393-1855-112759495637085/AnsiballZ_stat.py'
Nov 29 06:41:01 compute-0 sudo[233317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:01 compute-0 python3.9[233319]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:41:01 compute-0 sudo[233317]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:01 compute-0 ceph-mon[74654]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:41:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:02.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:41:02 compute-0 sudo[233440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cobyqbzrevciyddkkbrrlregsvvfpzbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398461.168393-1855-112759495637085/AnsiballZ_copy.py'
Nov 29 06:41:02 compute-0 sudo[233440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:02 compute-0 python3.9[233442]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764398461.168393-1855-112759495637085/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:02 compute-0 sudo[233440]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 06:41:03 compute-0 sudo[233593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxyayhdzteqtnhkwdkuoolijqfhkrpps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398462.8781273-1903-246523764418752/AnsiballZ_lineinfile.py'
Nov 29 06:41:03 compute-0 sudo[233593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:03 compute-0 python3.9[233595]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:03 compute-0 sudo[233593]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:03.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:04 compute-0 ceph-mon[74654]: pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 06:41:04 compute-0 sudo[233745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkffrjqugqzvkeecmqjgcaaqcproeluv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398463.7513785-1927-33816148901323/AnsiballZ_systemd.py'
Nov 29 06:41:04 compute-0 sudo[233745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:04.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:41:04 compute-0 python3.9[233747]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:41:04 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 06:41:04 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 29 06:41:04 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 29 06:41:04 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 29 06:41:04 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 29 06:41:04 compute-0 sudo[233745]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 06:41:05 compute-0 sudo[233902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsyklpipdaffqsvirxjcvilitsbrwhyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398464.8863037-1951-189688446709898/AnsiballZ_dnf.py'
Nov 29 06:41:05 compute-0 sudo[233902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:05.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:05 compute-0 python3.9[233904]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 06:41:05 compute-0 ceph-mon[74654]: pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 06:41:06 compute-0 podman[233906]: 2025-11-29 06:41:06.150149289 +0000 UTC m=+0.097821405 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 29 06:41:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:41:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:06.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:41:06 compute-0 podman[233907]: 2025-11-29 06:41:06.214248681 +0000 UTC m=+0.161930358 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 06:41:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 132 op/s
Nov 29 06:41:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:07.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:07 compute-0 ceph-mon[74654]: pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 132 op/s
Nov 29 06:41:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:41:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:08.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:41:08 compute-0 systemd[1]: Reloading.
Nov 29 06:41:08 compute-0 systemd-rc-local-generator[233970]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:41:08 compute-0 systemd-sysv-generator[233980]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:41:08 compute-0 systemd[1]: Reloading.
Nov 29 06:41:08 compute-0 systemd-rc-local-generator[234009]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:41:08 compute-0 systemd-sysv-generator[234014]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:41:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 29 06:41:09 compute-0 systemd-logind[797]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 06:41:09 compute-0 systemd-logind[797]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 06:41:09 compute-0 lvm[234062]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 06:41:09 compute-0 lvm[234062]: VG ceph_vg0 finished
Nov 29 06:41:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:41:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:09.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:09 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 06:41:09 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 06:41:09 compute-0 systemd[1]: Reloading.
Nov 29 06:41:09 compute-0 systemd-rc-local-generator[234112]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:41:09 compute-0 systemd-sysv-generator[234115]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:41:09 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 06:41:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:10.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:10 compute-0 ceph-mon[74654]: pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 29 06:41:10 compute-0 sudo[234122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:10 compute-0 sudo[234122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:10 compute-0 sudo[234122]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:10 compute-0 sudo[234147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:10 compute-0 sudo[234147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:10 compute-0 sudo[234147]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 29 06:41:11 compute-0 sudo[233902]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:41:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:11.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:41:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 06:41:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 06:41:11 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.583s CPU time.
Nov 29 06:41:11 compute-0 systemd[1]: run-r6ee5c51bd5404b4996ca3bf7ce05adef.service: Deactivated successfully.
Nov 29 06:41:11 compute-0 sshd-session[234689]: Received disconnect from 197.13.24.157 port 34092:11: Bye Bye [preauth]
Nov 29 06:41:11 compute-0 sshd-session[234689]: Disconnected from authenticating user root 197.13.24.157 port 34092 [preauth]
Nov 29 06:41:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:12.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:12 compute-0 ceph-mon[74654]: pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 29 06:41:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:41:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:41:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:13.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:13 compute-0 ceph-mon[74654]: pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Nov 29 06:41:13 compute-0 sudo[235454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huzeyuqffhgujviodfervjkhrwzqerbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398473.337711-1975-72804963827305/AnsiballZ_systemd_service.py'
Nov 29 06:41:13 compute-0 sudo[235454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:13 compute-0 python3.9[235456]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:41:14 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 29 06:41:14 compute-0 iscsid[222530]: iscsid shutting down.
Nov 29 06:41:14 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 29 06:41:14 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 29 06:41:14 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 06:41:14 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 29 06:41:14 compute-0 systemd[1]: Started Open-iSCSI.
Nov 29 06:41:14 compute-0 sudo[235454]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:41:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:14.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:41:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:41:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 102 KiB/s rd, 0 B/s wr, 170 op/s
Nov 29 06:41:15 compute-0 python3.9[235610]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 06:41:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:41:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:15.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:41:15 compute-0 sudo[235765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abzlbejdmtjmxcfeqzjakerscgnjcnuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398475.5932486-2027-15070813718741/AnsiballZ_file.py'
Nov 29 06:41:15 compute-0 sudo[235765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:16 compute-0 python3.9[235767]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:16 compute-0 sudo[235765]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:41:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:16.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:41:16 compute-0 ceph-mon[74654]: pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 102 KiB/s rd, 0 B/s wr, 170 op/s
Nov 29 06:41:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 71 KiB/s rd, 0 B/s wr, 119 op/s
Nov 29 06:41:17 compute-0 sudo[235918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljkocpaamtpefciauxmfjwtvwzgwxpfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398476.799276-2060-76431684217125/AnsiballZ_systemd_service.py'
Nov 29 06:41:17 compute-0 sudo[235918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:41:17.228 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:41:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:41:17.230 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:41:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:41:17.230 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:41:17 compute-0 python3.9[235920]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 06:41:17 compute-0 systemd[1]: Reloading.
Nov 29 06:41:17 compute-0 systemd-rc-local-generator[235944]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:41:17 compute-0 systemd-sysv-generator[235950]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:41:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:17.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:17 compute-0 ceph-mon[74654]: pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 71 KiB/s rd, 0 B/s wr, 119 op/s
Nov 29 06:41:17 compute-0 sudo[235918]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:18.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:18 compute-0 python3.9[236105]: ansible-ansible.builtin.service_facts Invoked
Nov 29 06:41:18 compute-0 network[236122]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 06:41:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 29 06:41:18 compute-0 network[236123]: 'network-scripts' will be removed from distribution in near future.
Nov 29 06:41:18 compute-0 network[236124]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 06:41:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:41:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:19.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:20.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:20 compute-0 ceph-mon[74654]: pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 29 06:41:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:21.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:21 compute-0 ceph-mon[74654]: pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:41:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:22.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:41:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:23 compute-0 sshd-session[236215]: Received disconnect from 34.92.81.41 port 37708:11: Bye Bye [preauth]
Nov 29 06:41:23 compute-0 sshd-session[236215]: Disconnected from authenticating user root 34.92.81.41 port 37708 [preauth]
Nov 29 06:41:23 compute-0 sudo[236402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzmckwqqkvydpxrchanxklzasqvwviln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398483.149233-2117-204828118820709/AnsiballZ_systemd_service.py'
Nov 29 06:41:23 compute-0 sudo[236402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:23.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:23 compute-0 python3.9[236404]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:41:23 compute-0 sudo[236402]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:24 compute-0 ceph-mon[74654]: pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:41:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:24.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:41:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:41:24 compute-0 sudo[236555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkldlddcudufvsglcjugbpfjedatyxzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398483.9247782-2117-48261134510990/AnsiballZ_systemd_service.py'
Nov 29 06:41:24 compute-0 sudo[236555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:41:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:41:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:41:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:41:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:41:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:41:24 compute-0 python3.9[236557]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:41:24 compute-0 sudo[236555]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:25 compute-0 sudo[236709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyloellhoxgjmtvjpdfaifrqzukeuihr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398484.7641168-2117-136433527231645/AnsiballZ_systemd_service.py'
Nov 29 06:41:25 compute-0 sudo[236709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:25 compute-0 python3.9[236711]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:41:25 compute-0 sudo[236709]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:25.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:25 compute-0 ceph-mon[74654]: pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:26 compute-0 sudo[236862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfajxutdgwcasvmrqwkhldgzzfbtavog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398485.6952782-2117-16739040703653/AnsiballZ_systemd_service.py'
Nov 29 06:41:26 compute-0 sudo[236862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:26.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:26 compute-0 python3.9[236864]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:41:26 compute-0 sudo[236862]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:26 compute-0 sudo[237017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvztndfxyqokscszukspwqqlwlvhipfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398486.6832876-2117-3225615171127/AnsiballZ_systemd_service.py'
Nov 29 06:41:26 compute-0 sudo[237017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:27 compute-0 python3.9[237019]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:41:27 compute-0 sudo[237017]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:41:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:27.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:41:27 compute-0 sudo[237170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcnmluxtbyskikqwgskkjgrmfwypxixm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398487.3984833-2117-215987104828576/AnsiballZ_systemd_service.py'
Nov 29 06:41:27 compute-0 sudo[237170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:28 compute-0 python3.9[237172]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:41:28 compute-0 sudo[237170]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:28 compute-0 podman[237173]: 2025-11-29 06:41:28.10877708 +0000 UTC m=+0.076815352 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 06:41:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:28.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:28 compute-0 ceph-mon[74654]: pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:28 compute-0 sudo[237344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiavpzbwzobgohajhzcovvolgapmaddx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398488.2219915-2117-170837853500359/AnsiballZ_systemd_service.py'
Nov 29 06:41:28 compute-0 sudo[237344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:28 compute-0 python3.9[237346]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:41:28 compute-0 sudo[237344]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:41:29 compute-0 sudo[237500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msnjybyheajolhrrtqjhmkgclovifomb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398489.1132817-2117-176018157804287/AnsiballZ_systemd_service.py'
Nov 29 06:41:29 compute-0 sudo[237500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:29.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:41:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:41:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:41:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:41:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:41:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:41:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:41:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:41:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:41:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:41:29 compute-0 python3.9[237502]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:41:29 compute-0 sudo[237500]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:30.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:30 compute-0 sshd-session[237425]: Invalid user ubuntu from 49.247.35.31 port 51196
Nov 29 06:41:30 compute-0 sshd-session[236872]: Invalid user es from 58.210.98.130 port 63881
Nov 29 06:41:30 compute-0 sshd-session[237425]: Received disconnect from 49.247.35.31 port 51196:11: Bye Bye [preauth]
Nov 29 06:41:30 compute-0 sshd-session[237425]: Disconnected from invalid user ubuntu 49.247.35.31 port 51196 [preauth]
Nov 29 06:41:30 compute-0 ceph-mon[74654]: pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:30 compute-0 sshd-session[236872]: Received disconnect from 58.210.98.130 port 63881:11: Bye Bye [preauth]
Nov 29 06:41:30 compute-0 sshd-session[236872]: Disconnected from invalid user es 58.210.98.130 port 63881 [preauth]
Nov 29 06:41:30 compute-0 sudo[237657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcsepzjmyihogwytjapaopqytbwxjjqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398490.2931721-2294-181500976174469/AnsiballZ_file.py'
Nov 29 06:41:30 compute-0 sudo[237657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:30 compute-0 sudo[237652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:30 compute-0 sudo[237652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:30 compute-0 sudo[237652]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:30 compute-0 sudo[237682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:30 compute-0 sudo[237682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:30 compute-0 sudo[237682]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:31 compute-0 python3.9[237676]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:31 compute-0 sudo[237657]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:31.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:31 compute-0 sudo[237857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cycqlnhchyzygkoqyyceghyqnfrridcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398491.2552822-2294-44886270010538/AnsiballZ_file.py'
Nov 29 06:41:31 compute-0 sudo[237857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:31 compute-0 python3.9[237859]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:31 compute-0 sudo[237857]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:32 compute-0 ceph-mon[74654]: pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:41:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:32.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:41:32 compute-0 sudo[238009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnwxgdqunrmxswfuktdafecwsglrobll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398492.1402786-2294-211812463506067/AnsiballZ_file.py'
Nov 29 06:41:32 compute-0 sudo[238009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:33 compute-0 ceph-mon[74654]: pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:33 compute-0 python3.9[238011]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:33 compute-0 sudo[238009]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:33.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:33 compute-0 sudo[238162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnatwnbnchrwcmftwzhcjugwqsbtmymu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398493.4793572-2294-39135510600188/AnsiballZ_file.py'
Nov 29 06:41:33 compute-0 sudo[238162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:34 compute-0 python3.9[238164]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:34 compute-0 sudo[238162]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:34.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:41:34 compute-0 sudo[238314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwyhiryeltnirhvavpwmspawleuxvsgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398494.1664262-2294-13992010816522/AnsiballZ_file.py'
Nov 29 06:41:34 compute-0 sudo[238314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:34 compute-0 python3.9[238316]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:34 compute-0 sudo[238314]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:35 compute-0 sudo[238467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtgdvppaucrtjnffdnbxbpynrwnyoqzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398494.780461-2294-143040147369476/AnsiballZ_file.py'
Nov 29 06:41:35 compute-0 sudo[238467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:35 compute-0 python3.9[238469]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:35 compute-0 sudo[238467]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:35 compute-0 ceph-mon[74654]: pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:35.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:35 compute-0 sudo[238619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdainxgzysulytwpkomusjxbfrghdjow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398495.451946-2294-51793648840616/AnsiballZ_file.py'
Nov 29 06:41:35 compute-0 sudo[238619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:35 compute-0 python3.9[238621]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:36 compute-0 sudo[238619]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:41:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:36.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:41:36 compute-0 sudo[238721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:36 compute-0 sudo[238721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:36 compute-0 sudo[238721]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:36 compute-0 sudo[238776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:41:36 compute-0 sudo[238776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:36 compute-0 sudo[238776]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:36 compute-0 podman[238751]: 2025-11-29 06:41:36.445195937 +0000 UTC m=+0.058014940 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:41:36 compute-0 sudo[238866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zurpninlyqroumklrgikwueokimtcanx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398496.1560876-2294-141104613312547/AnsiballZ_file.py'
Nov 29 06:41:36 compute-0 podman[238763]: 2025-11-29 06:41:36.485122635 +0000 UTC m=+0.102539748 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 06:41:36 compute-0 sudo[238866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:36 compute-0 sudo[238860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:36 compute-0 sudo[238860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:36 compute-0 sudo[238860]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:36 compute-0 sudo[238894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:41:36 compute-0 sudo[238894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:36 compute-0 python3.9[238888]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:36 compute-0 sudo[238866]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:37 compute-0 sudo[238894]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:37 compute-0 sudo[239100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhqryxdldhoebsjcvvvxwenbjxrwdpdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398496.8704937-2465-34927792491938/AnsiballZ_file.py'
Nov 29 06:41:37 compute-0 sudo[239100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:37 compute-0 python3.9[239102]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:37 compute-0 sudo[239100]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:37.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:37 compute-0 ceph-mon[74654]: pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:37 compute-0 sudo[239252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oebnweghwlyqbclevpknteraarmpxbbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398497.5303066-2465-257618193837332/AnsiballZ_file.py'
Nov 29 06:41:37 compute-0 sudo[239252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:41:38 compute-0 python3.9[239254]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:38 compute-0 sudo[239252]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:41:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:41:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:41:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:38.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:38 compute-0 sudo[239406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbrbupxveftgwugupculjfkbaicsphnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398498.2983618-2465-99117001232382/AnsiballZ_file.py'
Nov 29 06:41:38 compute-0 sudo[239406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:38 compute-0 python3.9[239408]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:38 compute-0 sudo[239406]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 06:41:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 06:41:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:41:38 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:41:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:41:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:41:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:41:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:41:38 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 4e0edc45-8c0b-47fa-b728-02ba9928b432 does not exist
Nov 29 06:41:38 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 9aeb3286-8032-4305-aa06-384a680ed67c does not exist
Nov 29 06:41:38 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev d39aece6-ed99-4e0f-abb0-7fada5d9c3a6 does not exist
Nov 29 06:41:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:41:38 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:41:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:41:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:41:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:41:38 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:41:39 compute-0 sudo[239433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:39 compute-0 sudo[239433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:39 compute-0 sudo[239433]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:39 compute-0 sudo[239464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:41:39 compute-0 sudo[239464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:39 compute-0 sudo[239464]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:39 compute-0 sudo[239511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:39 compute-0 sudo[239511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:39 compute-0 sudo[239511]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:39 compute-0 sudo[239562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:41:39 compute-0 sudo[239562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:41:39 compute-0 sudo[239659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhveicwkemxzrmabnjveoskwszjnjyrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398499.0618212-2465-264802553012937/AnsiballZ_file.py'
Nov 29 06:41:39 compute-0 sudo[239659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:41:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:41:39 compute-0 ceph-mon[74654]: pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 06:41:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:41:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:41:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:41:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:41:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:41:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:41:39 compute-0 python3.9[239671]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:39 compute-0 sudo[239659]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:39 compute-0 podman[239699]: 2025-11-29 06:41:39.573871723 +0000 UTC m=+0.049959303 container create 769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 06:41:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:41:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:39.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:41:39 compute-0 systemd[1]: Started libpod-conmon-769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae.scope.
Nov 29 06:41:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:41:39 compute-0 podman[239699]: 2025-11-29 06:41:39.552019996 +0000 UTC m=+0.028107586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:41:39 compute-0 podman[239699]: 2025-11-29 06:41:39.65937537 +0000 UTC m=+0.135462980 container init 769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:41:39 compute-0 podman[239699]: 2025-11-29 06:41:39.668349723 +0000 UTC m=+0.144437303 container start 769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 29 06:41:39 compute-0 podman[239699]: 2025-11-29 06:41:39.672033127 +0000 UTC m=+0.148120707 container attach 769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 06:41:39 compute-0 cool_mendeleev[239739]: 167 167
Nov 29 06:41:39 compute-0 systemd[1]: libpod-769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae.scope: Deactivated successfully.
Nov 29 06:41:39 compute-0 podman[239699]: 2025-11-29 06:41:39.675994899 +0000 UTC m=+0.152082479 container died 769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 06:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4476dec3443a9e54e49e8fca746b2ace259f6a8fa23ebecbfed0c48376d42f8c-merged.mount: Deactivated successfully.
Nov 29 06:41:39 compute-0 podman[239699]: 2025-11-29 06:41:39.718249653 +0000 UTC m=+0.194337233 container remove 769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mendeleev, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 06:41:39 compute-0 systemd[1]: libpod-conmon-769f3eeea9b5d798df4670a98e90337e560f1c3e7b5ae48a9ef979937c9dbaae.scope: Deactivated successfully.
Nov 29 06:41:39 compute-0 sshd-session[239355]: Invalid user app from 118.193.39.127 port 51910
Nov 29 06:41:39 compute-0 podman[239839]: 2025-11-29 06:41:39.910400584 +0000 UTC m=+0.054289946 container create 4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 06:41:39 compute-0 systemd[1]: Started libpod-conmon-4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c.scope.
Nov 29 06:41:39 compute-0 podman[239839]: 2025-11-29 06:41:39.886635742 +0000 UTC m=+0.030525114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:41:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:41:39 compute-0 sshd-session[239355]: Received disconnect from 118.193.39.127 port 51910:11: Bye Bye [preauth]
Nov 29 06:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b301fe45acc6d28d7b2b354d17cd99724ab89e5466a3b3ed342e73e78aa6df6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:41:39 compute-0 sshd-session[239355]: Disconnected from invalid user app 118.193.39.127 port 51910 [preauth]
Nov 29 06:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b301fe45acc6d28d7b2b354d17cd99724ab89e5466a3b3ed342e73e78aa6df6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:41:39 compute-0 sudo[239908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpjcuxbwkyoydxpihutygbdvgzanmtne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398499.672673-2465-185937219817883/AnsiballZ_file.py'
Nov 29 06:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b301fe45acc6d28d7b2b354d17cd99724ab89e5466a3b3ed342e73e78aa6df6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b301fe45acc6d28d7b2b354d17cd99724ab89e5466a3b3ed342e73e78aa6df6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:41:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b301fe45acc6d28d7b2b354d17cd99724ab89e5466a3b3ed342e73e78aa6df6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:41:39 compute-0 sudo[239908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:40 compute-0 podman[239839]: 2025-11-29 06:41:40.005329066 +0000 UTC m=+0.149218438 container init 4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:41:40 compute-0 podman[239839]: 2025-11-29 06:41:40.022295606 +0000 UTC m=+0.166184938 container start 4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:41:40 compute-0 podman[239839]: 2025-11-29 06:41:40.026528215 +0000 UTC m=+0.170417537 container attach 4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 06:41:40 compute-0 python3.9[239910]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:40 compute-0 sudo[239908]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:40.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:40 compute-0 sudo[240069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngxewdzsudktvehnxsdfaliijiujdkps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398500.4792235-2465-277041393074425/AnsiballZ_file.py'
Nov 29 06:41:40 compute-0 sudo[240069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:40 compute-0 stupefied_black[239899]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:41:40 compute-0 stupefied_black[239899]: --> relative data size: 1.0
Nov 29 06:41:40 compute-0 stupefied_black[239899]: --> All data devices are unavailable
Nov 29 06:41:40 compute-0 systemd[1]: libpod-4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c.scope: Deactivated successfully.
Nov 29 06:41:40 compute-0 podman[239839]: 2025-11-29 06:41:40.931661865 +0000 UTC m=+1.075551227 container died 4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 06:41:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b301fe45acc6d28d7b2b354d17cd99724ab89e5466a3b3ed342e73e78aa6df6-merged.mount: Deactivated successfully.
Nov 29 06:41:41 compute-0 podman[239839]: 2025-11-29 06:41:41.159996137 +0000 UTC m=+1.303885499 container remove 4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 06:41:41 compute-0 systemd[1]: libpod-conmon-4d4f0031d04e934f4f8af790e3b6c06e446dbc29ab9b1e79f50efdea68697f2c.scope: Deactivated successfully.
Nov 29 06:41:41 compute-0 python3.9[240073]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:41 compute-0 sudo[239562]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:41 compute-0 sudo[240069]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:41 compute-0 ceph-mon[74654]: pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:41 compute-0 sudo[240088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:41 compute-0 sudo[240088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:41 compute-0 sudo[240088]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:41 compute-0 sudo[240137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:41:41 compute-0 sudo[240137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:41 compute-0 sudo[240137]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:41 compute-0 sudo[240191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:41 compute-0 sudo[240191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:41 compute-0 sudo[240191]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:41 compute-0 sudo[240240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:41:41 compute-0 sudo[240240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:41.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:41 compute-0 sudo[240344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqrnssalrbziditpaimfvwjfbkyhrnhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398501.3827212-2465-187660752615882/AnsiballZ_file.py'
Nov 29 06:41:41 compute-0 sudo[240344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:41 compute-0 python3.9[240354]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:41 compute-0 sudo[240344]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:42 compute-0 podman[240383]: 2025-11-29 06:41:42.004135902 +0000 UTC m=+0.063454544 container create 9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 06:41:42 compute-0 systemd[1]: Started libpod-conmon-9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73.scope.
Nov 29 06:41:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:41:42 compute-0 podman[240383]: 2025-11-29 06:41:41.9842512 +0000 UTC m=+0.043569872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:41:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:42.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:42 compute-0 podman[240383]: 2025-11-29 06:41:42.270131919 +0000 UTC m=+0.329450591 container init 9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 06:41:42 compute-0 podman[240383]: 2025-11-29 06:41:42.278507076 +0000 UTC m=+0.337825708 container start 9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:41:42 compute-0 infallible_brown[240422]: 167 167
Nov 29 06:41:42 compute-0 systemd[1]: libpod-9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73.scope: Deactivated successfully.
Nov 29 06:41:42 compute-0 podman[240383]: 2025-11-29 06:41:42.2935151 +0000 UTC m=+0.352833822 container attach 9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:41:42 compute-0 podman[240383]: 2025-11-29 06:41:42.294242351 +0000 UTC m=+0.353561003 container died 9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 06:41:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-81d6970558bb1a1bad9687bc56ac35226c665cdca10a633aa22a4ed217c10d43-merged.mount: Deactivated successfully.
Nov 29 06:41:42 compute-0 podman[240383]: 2025-11-29 06:41:42.393575298 +0000 UTC m=+0.452893930 container remove 9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 06:41:42 compute-0 systemd[1]: libpod-conmon-9b77428928a70727672ee3f1a208e447167b423921dd3ca0edf4349eaeef3b73.scope: Deactivated successfully.
Nov 29 06:41:42 compute-0 sudo[240569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnoqtkpzchzmalpwklggououopzskjcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398502.1317453-2465-70451494820647/AnsiballZ_file.py'
Nov 29 06:41:42 compute-0 sudo[240569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:42 compute-0 podman[240577]: 2025-11-29 06:41:42.587366874 +0000 UTC m=+0.026618593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:41:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:42 compute-0 python3.9[240571]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:41:42 compute-0 podman[240577]: 2025-11-29 06:41:42.703168427 +0000 UTC m=+0.142420126 container create fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 06:41:42 compute-0 sudo[240569]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:42 compute-0 systemd[1]: Started libpod-conmon-fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16.scope.
Nov 29 06:41:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c72eb701116f367d89bd6422c20bcab8b7fbf02d6b2c661142a8658e8bbc6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c72eb701116f367d89bd6422c20bcab8b7fbf02d6b2c661142a8658e8bbc6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c72eb701116f367d89bd6422c20bcab8b7fbf02d6b2c661142a8658e8bbc6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:41:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c72eb701116f367d89bd6422c20bcab8b7fbf02d6b2c661142a8658e8bbc6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:41:42 compute-0 podman[240577]: 2025-11-29 06:41:42.913488691 +0000 UTC m=+0.352740410 container init fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cerf, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 06:41:42 compute-0 podman[240577]: 2025-11-29 06:41:42.921059575 +0000 UTC m=+0.360311284 container start fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 06:41:43 compute-0 podman[240577]: 2025-11-29 06:41:43.014086264 +0000 UTC m=+0.453337973 container attach fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 06:41:43 compute-0 sudo[240749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knuphxdylguzyvuxddjekraizycgrxdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398503.056455-2639-43710663940890/AnsiballZ_command.py'
Nov 29 06:41:43 compute-0 sudo[240749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:43 compute-0 python3.9[240751]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:41:43 compute-0 sudo[240749]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:43.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]: {
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:     "1": [
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:         {
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:             "devices": [
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:                 "/dev/loop3"
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:             ],
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:             "lv_name": "ceph_lv0",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:             "lv_size": "7511998464",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:             "name": "ceph_lv0",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:             "tags": {
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:                 "ceph.cluster_name": "ceph",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:                 "ceph.crush_device_class": "",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:                 "ceph.encrypted": "0",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:                 "ceph.osd_id": "1",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:                 "ceph.type": "block",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:                 "ceph.vdo": "0"
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:             },
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:             "type": "block",
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:             "vg_name": "ceph_vg0"
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:         }
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]:     ]
Nov 29 06:41:43 compute-0 flamboyant_cerf[240618]: }
Nov 29 06:41:43 compute-0 systemd[1]: libpod-fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16.scope: Deactivated successfully.
Nov 29 06:41:43 compute-0 podman[240577]: 2025-11-29 06:41:43.697375654 +0000 UTC m=+1.136627383 container died fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:41:43 compute-0 ceph-mon[74654]: pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-08c72eb701116f367d89bd6422c20bcab8b7fbf02d6b2c661142a8658e8bbc6a-merged.mount: Deactivated successfully.
Nov 29 06:41:43 compute-0 podman[240577]: 2025-11-29 06:41:43.984524108 +0000 UTC m=+1.423775817 container remove fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 06:41:44 compute-0 sudo[240240]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:44 compute-0 systemd[1]: libpod-conmon-fbc417902ca205cfd0802b7080cb9c03a4283ac6889c9209d3ea1a6494f23a16.scope: Deactivated successfully.
Nov 29 06:41:44 compute-0 sudo[240845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:44 compute-0 sudo[240845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:44 compute-0 sudo[240845]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:44 compute-0 sudo[240893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:41:44 compute-0 sudo[240893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:44 compute-0 sudo[240893]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:44.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:44 compute-0 sudo[240942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:41:44 compute-0 sudo[240942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:44 compute-0 sudo[240942]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:44 compute-0 sudo[240994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:41:44 compute-0 sudo[240994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:44 compute-0 python3.9[240992]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 06:41:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:44 compute-0 podman[241084]: 2025-11-29 06:41:44.705622777 +0000 UTC m=+0.045985351 container create c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galois, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 06:41:44 compute-0 systemd[1]: Started libpod-conmon-c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063.scope.
Nov 29 06:41:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:41:44 compute-0 podman[241084]: 2025-11-29 06:41:44.68660849 +0000 UTC m=+0.026971114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:41:44 compute-0 podman[241084]: 2025-11-29 06:41:44.782377746 +0000 UTC m=+0.122740340 container init c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 06:41:44 compute-0 podman[241084]: 2025-11-29 06:41:44.789287581 +0000 UTC m=+0.129650155 container start c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 06:41:44 compute-0 podman[241084]: 2025-11-29 06:41:44.792475001 +0000 UTC m=+0.132837605 container attach c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galois, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:41:44 compute-0 dreamy_galois[241100]: 167 167
Nov 29 06:41:44 compute-0 systemd[1]: libpod-c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063.scope: Deactivated successfully.
Nov 29 06:41:44 compute-0 podman[241084]: 2025-11-29 06:41:44.793346696 +0000 UTC m=+0.133709270 container died c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 06:41:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f283f6101a688c969a5065df13aeb537ef0960d50dd5566b3c59d2305a52806e-merged.mount: Deactivated successfully.
Nov 29 06:41:44 compute-0 podman[241084]: 2025-11-29 06:41:44.826143893 +0000 UTC m=+0.166506467 container remove c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_galois, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 06:41:44 compute-0 systemd[1]: libpod-conmon-c67b8487f73012c48de75fc94c42063abd49bedaa43f33442e4b096f0b141063.scope: Deactivated successfully.
Nov 29 06:41:45 compute-0 podman[241159]: 2025-11-29 06:41:45.001002653 +0000 UTC m=+0.053011009 container create b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:41:45 compute-0 podman[241159]: 2025-11-29 06:41:44.972002424 +0000 UTC m=+0.024010800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:41:45 compute-0 systemd[1]: Started libpod-conmon-b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2.scope.
Nov 29 06:41:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8d0fca5322462b05c1a35e92475119852950b214b0051bd6eb548cdc9b1f25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8d0fca5322462b05c1a35e92475119852950b214b0051bd6eb548cdc9b1f25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8d0fca5322462b05c1a35e92475119852950b214b0051bd6eb548cdc9b1f25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8d0fca5322462b05c1a35e92475119852950b214b0051bd6eb548cdc9b1f25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:41:45 compute-0 podman[241159]: 2025-11-29 06:41:45.248559449 +0000 UTC m=+0.300567855 container init b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_borg, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 06:41:45 compute-0 podman[241159]: 2025-11-29 06:41:45.264538501 +0000 UTC m=+0.316546867 container start b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 06:41:45 compute-0 podman[241159]: 2025-11-29 06:41:45.271449396 +0000 UTC m=+0.323457772 container attach b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:41:45 compute-0 sudo[241270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnphiilatsefwltviownzziogustntsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398504.9035723-2693-29651132077399/AnsiballZ_systemd_service.py'
Nov 29 06:41:45 compute-0 sudo[241270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:45.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:45 compute-0 python3.9[241272]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 06:41:45 compute-0 systemd[1]: Reloading.
Nov 29 06:41:45 compute-0 systemd-sysv-generator[241302]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:41:45 compute-0 systemd-rc-local-generator[241299]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:41:45 compute-0 ceph-mon[74654]: pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:46 compute-0 sudo[241270]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:46 compute-0 upbeat_borg[241239]: {
Nov 29 06:41:46 compute-0 upbeat_borg[241239]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:41:46 compute-0 upbeat_borg[241239]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:41:46 compute-0 upbeat_borg[241239]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:41:46 compute-0 upbeat_borg[241239]:         "osd_id": 1,
Nov 29 06:41:46 compute-0 upbeat_borg[241239]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:41:46 compute-0 upbeat_borg[241239]:         "type": "bluestore"
Nov 29 06:41:46 compute-0 upbeat_borg[241239]:     }
Nov 29 06:41:46 compute-0 upbeat_borg[241239]: }
Nov 29 06:41:46 compute-0 systemd[1]: libpod-b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2.scope: Deactivated successfully.
Nov 29 06:41:46 compute-0 podman[241159]: 2025-11-29 06:41:46.173758056 +0000 UTC m=+1.225766412 container died b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 06:41:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a8d0fca5322462b05c1a35e92475119852950b214b0051bd6eb548cdc9b1f25-merged.mount: Deactivated successfully.
Nov 29 06:41:46 compute-0 podman[241159]: 2025-11-29 06:41:46.238616799 +0000 UTC m=+1.290625135 container remove b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 06:41:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:46.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:46 compute-0 systemd[1]: libpod-conmon-b49f0d72632778e894e6b561b7c0808eca97bee94eb55aa1e36c363cdb1a48b2.scope: Deactivated successfully.
Nov 29 06:41:46 compute-0 sudo[240994]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:41:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:41:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:41:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:41:46 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 7a41ce88-8c4b-4ec2-8f70-b0d368995dce does not exist
Nov 29 06:41:46 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 8ccc1637-f7ad-4480-a750-a02bc58330e4 does not exist
Nov 29 06:41:46 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 28d2e9dd-fba5-441f-af64-a7328a4523da does not exist
Nov 29 06:41:46 compute-0 sudo[241508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceyevufghkqcqxmimjkalmdyolmzonnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398506.2121542-2717-87095805407442/AnsiballZ_command.py'
Nov 29 06:41:46 compute-0 sudo[241508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:46 compute-0 sudo[241470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:46 compute-0 sudo[241470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:46 compute-0 sudo[241470]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:46 compute-0 sudo[241515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:41:46 compute-0 sudo[241515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:46 compute-0 sudo[241515]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:46 compute-0 python3.9[241512]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:41:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:47.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:47 compute-0 sudo[241508]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:41:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:41:47 compute-0 ceph-mon[74654]: pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:48.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:48 compute-0 sudo[241691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfjhrozozhexadojsomebflxgzclmadm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398507.9647632-2717-156097549986751/AnsiballZ_command.py'
Nov 29 06:41:48 compute-0 sudo[241691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:48 compute-0 python3.9[241693]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:41:48 compute-0 sudo[241691]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:48 compute-0 sudo[241845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nldosyugtykrtleeukmmwvtkehnkuaul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398508.711925-2717-244979513212039/AnsiballZ_command.py'
Nov 29 06:41:48 compute-0 sudo[241845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:49 compute-0 python3.9[241847]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:41:49 compute-0 sudo[241845]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:41:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:49.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:49 compute-0 sudo[242002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ombtfzlltawopvlmkhwdzmnvqwlzvoxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398509.340633-2717-261601484031004/AnsiballZ_command.py'
Nov 29 06:41:49 compute-0 sudo[242002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:49 compute-0 python3.9[242004]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:41:49 compute-0 sudo[242002]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:50 compute-0 ceph-mon[74654]: pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:50 compute-0 sshd-session[241919]: Received disconnect from 193.46.255.103 port 46134:11:  [preauth]
Nov 29 06:41:50 compute-0 sshd-session[241919]: Disconnected from authenticating user root 193.46.255.103 port 46134 [preauth]
Nov 29 06:41:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:50.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:50 compute-0 sudo[242155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxdatyqbijrytyjkrrcvcqnztmgsrxxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398510.0872412-2717-259426357950233/AnsiballZ_command.py'
Nov 29 06:41:50 compute-0 sudo[242155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:50 compute-0 python3.9[242157]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:41:50 compute-0 sudo[242155]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:50 compute-0 sudo[242211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:50 compute-0 sudo[242211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:50 compute-0 sudo[242211]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:50 compute-0 sudo[242261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:41:50 compute-0 sudo[242261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:41:50 compute-0 sudo[242261]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:51 compute-0 sudo[242359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waqpylzcqyudqwkpvydmaaxsauftqqgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398510.7953067-2717-43493990874786/AnsiballZ_command.py'
Nov 29 06:41:51 compute-0 sudo[242359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:51 compute-0 python3.9[242361]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:41:51 compute-0 sudo[242359]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:51 compute-0 ceph-mon[74654]: pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:51.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:51 compute-0 sshd-session[241950]: Invalid user cumulus from 27.112.78.245 port 45526
Nov 29 06:41:51 compute-0 sudo[242512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egjhoggmlpxjtemcuxkztfybnepihddv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398511.4650073-2717-51803260337616/AnsiballZ_command.py'
Nov 29 06:41:51 compute-0 sudo[242512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:51 compute-0 sshd-session[241950]: Received disconnect from 27.112.78.245 port 45526:11: Bye Bye [preauth]
Nov 29 06:41:51 compute-0 sshd-session[241950]: Disconnected from invalid user cumulus 27.112.78.245 port 45526 [preauth]
Nov 29 06:41:51 compute-0 python3.9[242514]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:41:51 compute-0 sudo[242512]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:52.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:52 compute-0 sudo[242665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtjtjanjpisnuyyvaxmbkwwcovzhhvyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398512.1261296-2717-137469866142621/AnsiballZ_command.py'
Nov 29 06:41:52 compute-0 sudo[242665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:52 compute-0 python3.9[242667]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 06:41:52 compute-0 sudo[242665]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:41:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:53.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:41:53 compute-0 ceph-mon[74654]: pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:54.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:41:54
Nov 29 06:41:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:41:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:41:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', '.rgw.root', 'images', 'vms']
Nov 29 06:41:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:41:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:41:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:41:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:41:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:41:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:41:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:41:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:41:54 compute-0 sudo[242819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjjzkzyajxhcmmyhbfggvdkgjxdluwxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398514.1598153-2924-250622927328027/AnsiballZ_file.py'
Nov 29 06:41:54 compute-0 sudo[242819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:54 compute-0 python3.9[242821]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:41:54 compute-0 sudo[242819]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:55 compute-0 sudo[242972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylukaypmjcfhczllnhlqzlmmyjpbzgjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398514.9327133-2924-99164943543830/AnsiballZ_file.py'
Nov 29 06:41:55 compute-0 sudo[242972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:55 compute-0 python3.9[242974]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:41:55 compute-0 sudo[242972]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:55.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:55 compute-0 sudo[243124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mabcezodzoagrgjmlbupdmnvolbijmgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398515.5869412-2924-34240372007336/AnsiballZ_file.py'
Nov 29 06:41:55 compute-0 sudo[243124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:56 compute-0 python3.9[243126]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:41:56 compute-0 sudo[243124]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:56.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:56 compute-0 ceph-mon[74654]: pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:56 compute-0 sudo[243276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmrwahhgpxsizhpzzsbqsctbcuisrqjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398516.4337223-2990-269214876998396/AnsiballZ_file.py'
Nov 29 06:41:56 compute-0 sudo[243276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:56 compute-0 python3.9[243278]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:41:56 compute-0 sudo[243276]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:57 compute-0 sudo[243429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxbsnstjnrbwutuhvekxmbaeosooykmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398517.1145828-2990-160048356388618/AnsiballZ_file.py'
Nov 29 06:41:57 compute-0 sudo[243429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:57 compute-0 python3.9[243431]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:41:57 compute-0 ceph-mon[74654]: pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:57.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:57 compute-0 sudo[243429]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:58 compute-0 sudo[243581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byyqhuzkdpxdrmjuottzsjstcslsciem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398517.7585764-2990-36806205856039/AnsiballZ_file.py'
Nov 29 06:41:58 compute-0 sudo[243581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:58 compute-0 python3.9[243583]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:41:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:41:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:41:58.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:41:58 compute-0 sudo[243581]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:58 compute-0 podman[243584]: 2025-11-29 06:41:58.326642976 +0000 UTC m=+0.060471299 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 06:41:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:58 compute-0 sudo[243755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrzmcfdgxwkkmszihdyrdgjetnkdpels ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398518.4258115-2990-120605953712545/AnsiballZ_file.py'
Nov 29 06:41:58 compute-0 sudo[243755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:58 compute-0 sshd-session[243629]: Invalid user mysql from 162.214.92.14 port 53088
Nov 29 06:41:58 compute-0 sshd-session[243629]: Received disconnect from 162.214.92.14 port 53088:11: Bye Bye [preauth]
Nov 29 06:41:58 compute-0 sshd-session[243629]: Disconnected from invalid user mysql 162.214.92.14 port 53088 [preauth]
Nov 29 06:41:58 compute-0 python3.9[243757]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:41:58 compute-0 sudo[243755]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:41:59 compute-0 sudo[243908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elrjurcotyixyhsldslcvsujeggmlvop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398519.0693786-2990-202531588758628/AnsiballZ_file.py'
Nov 29 06:41:59 compute-0 sudo[243908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:41:59 compute-0 python3.9[243910]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:41:59 compute-0 sudo[243908]: pam_unix(sudo:session): session closed for user root
Nov 29 06:41:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:41:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:41:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:41:59.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:41:59 compute-0 ceph-mon[74654]: pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:41:59 compute-0 sudo[244060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuepnlwswdahonrtjgiuxmlwtrclisll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398519.693766-2990-186130117602929/AnsiballZ_file.py'
Nov 29 06:41:59 compute-0 sudo[244060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:00 compute-0 python3.9[244062]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:42:00 compute-0 sudo[244060]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:42:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:00.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:42:00 compute-0 sudo[244212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unprzlfrfghqcupgndmdlarmktrdmbvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398520.3975573-2990-198396155508027/AnsiballZ_file.py'
Nov 29 06:42:00 compute-0 sudo[244212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:00 compute-0 python3.9[244214]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:42:00 compute-0 sudo[244212]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:01.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:01 compute-0 ceph-mon[74654]: pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:02.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:03 compute-0 sshd-session[244242]: Invalid user packer from 103.143.238.173 port 52416
Nov 29 06:42:03 compute-0 sshd-session[244242]: Received disconnect from 103.143.238.173 port 52416:11: Bye Bye [preauth]
Nov 29 06:42:03 compute-0 sshd-session[244242]: Disconnected from invalid user packer 103.143.238.173 port 52416 [preauth]
Nov 29 06:42:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:03.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:03 compute-0 ceph-mon[74654]: pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:03 compute-0 sshd-session[244240]: Invalid user test1 from 103.147.159.91 port 54428
Nov 29 06:42:04 compute-0 sshd-session[244240]: Received disconnect from 103.147.159.91 port 54428:11: Bye Bye [preauth]
Nov 29 06:42:04 compute-0 sshd-session[244240]: Disconnected from invalid user test1 103.147.159.91 port 54428 [preauth]
Nov 29 06:42:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:04.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:42:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:42:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:05.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:42:05 compute-0 ceph-mon[74654]: pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:06.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:07 compute-0 podman[244247]: 2025-11-29 06:42:07.080031866 +0000 UTC m=+0.049039497 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:42:07 compute-0 podman[244248]: 2025-11-29 06:42:07.114675935 +0000 UTC m=+0.083425378 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 06:42:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:07.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:07 compute-0 ceph-mon[74654]: pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:08.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:42:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:09.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:09 compute-0 sshd-session[244293]: Invalid user frontend from 193.163.72.91 port 38562
Nov 29 06:42:09 compute-0 ceph-mon[74654]: pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:10 compute-0 sshd-session[244293]: Received disconnect from 193.163.72.91 port 38562:11: Bye Bye [preauth]
Nov 29 06:42:10 compute-0 sshd-session[244293]: Disconnected from invalid user frontend 193.163.72.91 port 38562 [preauth]
Nov 29 06:42:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:42:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:10.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:42:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:11 compute-0 sudo[244296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:42:11 compute-0 sudo[244296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:11 compute-0 sudo[244296]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:11 compute-0 sudo[244321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:42:11 compute-0 sudo[244321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:11 compute-0 sudo[244321]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:11.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:11 compute-0 ceph-mon[74654]: pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:12.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:42:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:42:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:42:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:13.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:42:14 compute-0 ceph-mon[74654]: pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:42:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:14.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:42:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:42:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:14 compute-0 sudo[244472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-effmlvlzjkyfxqtvmmkaqhgqothlmbpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398534.2921655-3315-253107511027101/AnsiballZ_getent.py'
Nov 29 06:42:14 compute-0 sudo[244472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:14 compute-0 python3.9[244474]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 29 06:42:14 compute-0 sudo[244472]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:15.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:15 compute-0 sudo[244626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmxfrojhgvnznvrmgsubbxgectvfhern ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398535.2176871-3339-186727510333866/AnsiballZ_group.py'
Nov 29 06:42:15 compute-0 sudo[244626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:15 compute-0 python3.9[244628]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 06:42:15 compute-0 groupadd[244629]: group added to /etc/group: name=nova, GID=42436
Nov 29 06:42:15 compute-0 groupadd[244629]: group added to /etc/gshadow: name=nova
Nov 29 06:42:15 compute-0 groupadd[244629]: new group: name=nova, GID=42436
Nov 29 06:42:15 compute-0 sudo[244626]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:16 compute-0 ceph-mon[74654]: pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:16.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:16 compute-0 sudo[244784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aggujltajvinxundnqxnawglfiydqvrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398536.1826386-3363-290027089864/AnsiballZ_user.py'
Nov 29 06:42:16 compute-0 sudo[244784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:16 compute-0 python3.9[244786]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 06:42:16 compute-0 useradd[244788]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 29 06:42:16 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 06:42:16 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 06:42:16 compute-0 useradd[244788]: add 'nova' to group 'libvirt'
Nov 29 06:42:16 compute-0 useradd[244788]: add 'nova' to shadow group 'libvirt'
Nov 29 06:42:16 compute-0 sudo[244784]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:17 compute-0 ceph-mon[74654]: pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:42:17.230 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:42:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:42:17.231 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:42:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:42:17.231 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:42:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:42:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:17.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:42:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:18.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:18 compute-0 sshd-session[244821]: Accepted publickey for zuul from 192.168.122.30 port 39962 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:42:18 compute-0 systemd-logind[797]: New session 51 of user zuul.
Nov 29 06:42:18 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 29 06:42:18 compute-0 sshd-session[244821]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:42:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:18 compute-0 sshd-session[244824]: Received disconnect from 192.168.122.30 port 39962:11: disconnected by user
Nov 29 06:42:18 compute-0 sshd-session[244824]: Disconnected from user zuul 192.168.122.30 port 39962
Nov 29 06:42:18 compute-0 sshd-session[244821]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:42:18 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Nov 29 06:42:18 compute-0 systemd-logind[797]: Session 51 logged out. Waiting for processes to exit.
Nov 29 06:42:18 compute-0 systemd-logind[797]: Removed session 51.
Nov 29 06:42:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:42:19 compute-0 python3.9[244975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:42:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:19.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:19 compute-0 ceph-mon[74654]: pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:20 compute-0 python3.9[245096]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398539.0246801-3438-244496785808796/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:42:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:20.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:21 compute-0 ceph-mon[74654]: pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:21 compute-0 python3.9[245246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:42:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:21.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:21 compute-0 python3.9[245323]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:42:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:22.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:22 compute-0 python3.9[245473]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:42:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:22 compute-0 python3.9[245594]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398541.866962-3438-54032531683931/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:42:23 compute-0 python3.9[245745]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:42:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:23.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:23 compute-0 ceph-mon[74654]: pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:24 compute-0 python3.9[245866]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398543.1266162-3438-138614804732523/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:42:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:24.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:42:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:42:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:42:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:42:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:42:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:42:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:42:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:24 compute-0 python3.9[246018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:42:25 compute-0 sshd-session[245867]: Received disconnect from 197.13.24.157 port 35870:11: Bye Bye [preauth]
Nov 29 06:42:25 compute-0 sshd-session[245867]: Disconnected from authenticating user root 197.13.24.157 port 35870 [preauth]
Nov 29 06:42:25 compute-0 python3.9[246140]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398544.3998175-3438-107750716453536/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:42:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:25.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:26 compute-0 ceph-mon[74654]: pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:26 compute-0 python3.9[246290]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:42:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:26.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:26 compute-0 python3.9[246411]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398545.6622844-3438-69367847712193/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:42:27 compute-0 sudo[246564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmcnlcpbewjtivavjktikcekcnugwmdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398547.0538175-3687-152489966672838/AnsiballZ_file.py'
Nov 29 06:42:27 compute-0 sudo[246564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:27 compute-0 python3.9[246566]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:42:27 compute-0 sudo[246564]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:27.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:28 compute-0 ceph-mon[74654]: pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:28 compute-0 sudo[246718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehoaqcckesvjzffylgxnawxacpkvsjca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398547.8522303-3711-142514917762502/AnsiballZ_copy.py'
Nov 29 06:42:28 compute-0 sudo[246718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:28.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:28 compute-0 python3.9[246720]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:42:28 compute-0 sudo[246718]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:28 compute-0 podman[246721]: 2025-11-29 06:42:28.54971379 +0000 UTC m=+0.086954649 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 06:42:28 compute-0 sshd-session[246512]: Invalid user admin123 from 103.63.25.115 port 35224
Nov 29 06:42:28 compute-0 sshd-session[246591]: Received disconnect from 176.109.67.96 port 51852:11: Bye Bye [preauth]
Nov 29 06:42:28 compute-0 sshd-session[246591]: Disconnected from authenticating user root 176.109.67.96 port 51852 [preauth]
Nov 29 06:42:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:28 compute-0 sshd-session[246512]: Received disconnect from 103.63.25.115 port 35224:11: Bye Bye [preauth]
Nov 29 06:42:28 compute-0 sshd-session[246512]: Disconnected from invalid user admin123 103.63.25.115 port 35224 [preauth]
Nov 29 06:42:29 compute-0 sudo[246890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxchbvopxxrlmkbwzilrwafpitmznafa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398548.6413188-3735-56424127314215/AnsiballZ_stat.py'
Nov 29 06:42:29 compute-0 sudo[246890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:29 compute-0 python3.9[246892]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:42:29 compute-0 sudo[246890]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:42:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:42:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:42:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:42:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:42:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:42:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:29.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:42:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:42:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:42:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:42:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:42:29 compute-0 ceph-mon[74654]: pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:29 compute-0 sudo[247042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhcnynnieculjrttmyrwjjuxjjllqgso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398549.4820871-3759-154584658180860/AnsiballZ_stat.py'
Nov 29 06:42:29 compute-0 sudo[247042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:30 compute-0 python3.9[247044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:42:30 compute-0 sudo[247042]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:42:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:30.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:42:30 compute-0 sudo[247165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbvhmuxywkbbztectyxpnnupdowgsmxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398549.4820871-3759-154584658180860/AnsiballZ_copy.py'
Nov 29 06:42:30 compute-0 sudo[247165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:30 compute-0 python3.9[247167]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764398549.4820871-3759-154584658180860/.source _original_basename=.5rxm8zm2 follow=False checksum=8dc8cde5f9871ff2228372ba7c6e010a4bfe6deb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 29 06:42:30 compute-0 sudo[247165]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:31 compute-0 sudo[247233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:42:31 compute-0 sudo[247233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:31 compute-0 sudo[247233]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:31 compute-0 sudo[247284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:42:31 compute-0 sudo[247284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:31 compute-0 sudo[247284]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:31 compute-0 python3.9[247370]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:42:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:31.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:42:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:32.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:42:32 compute-0 ceph-mon[74654]: pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:32 compute-0 python3.9[247522]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:42:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:32 compute-0 python3.9[247643]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398551.8392327-3837-262324410159520/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:42:33 compute-0 ceph-mon[74654]: pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:33.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:33 compute-0 python3.9[247794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 06:42:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:34.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:34 compute-0 python3.9[247915]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764398553.2478092-3882-210242173309905/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 06:42:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:42:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:35 compute-0 sudo[248066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owdgswkxkvplafdpzngkwcdgdzolecli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398554.8088865-3933-219879876621878/AnsiballZ_container_config_data.py'
Nov 29 06:42:35 compute-0 sudo[248066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:35 compute-0 python3.9[248068]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 29 06:42:35 compute-0 sudo[248066]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:35.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:35 compute-0 ceph-mon[74654]: pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:36 compute-0 sudo[248218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifacbehajluuuhmkvhbspgmnauxteagl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398555.7433424-3960-88810510873631/AnsiballZ_container_config_hash.py'
Nov 29 06:42:36 compute-0 sudo[248218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:36.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:36 compute-0 python3.9[248220]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 06:42:36 compute-0 sudo[248218]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:37 compute-0 sudo[248394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mojomrngvompxoiqbpwnywjuaaojmocz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764398556.8269696-3990-50358330274403/AnsiballZ_edpm_container_manage.py'
Nov 29 06:42:37 compute-0 sudo[248394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:37 compute-0 podman[248345]: 2025-11-29 06:42:37.234235744 +0000 UTC m=+0.075084803 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 06:42:37 compute-0 podman[248347]: 2025-11-29 06:42:37.271048495 +0000 UTC m=+0.105746870 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 06:42:37 compute-0 python3[248407]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 06:42:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:37.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:37 compute-0 ceph-mon[74654]: pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:38.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:42:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:39.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:40 compute-0 ceph-mon[74654]: pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:40.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:41 compute-0 ceph-mon[74654]: pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:41.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:42.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:43.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:44.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:42:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:45 compute-0 sshd-session[248494]: Invalid user ftptest from 34.92.81.41 port 40084
Nov 29 06:42:45 compute-0 sshd-session[248494]: Received disconnect from 34.92.81.41 port 40084:11: Bye Bye [preauth]
Nov 29 06:42:45 compute-0 sshd-session[248494]: Disconnected from invalid user ftptest 34.92.81.41 port 40084 [preauth]
Nov 29 06:42:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:45.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:46.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:47 compute-0 sudo[248500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:42:47 compute-0 sudo[248500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:47 compute-0 sudo[248500]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:47 compute-0 sudo[248526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:42:47 compute-0 sudo[248526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:47 compute-0 sudo[248526]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:47 compute-0 sudo[248551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:42:47 compute-0 sudo[248551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:47 compute-0 sudo[248551]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:47 compute-0 sudo[248576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:42:47 compute-0 sudo[248576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:47 compute-0 ceph-mon[74654]: pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:47.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:48 compute-0 podman[248434]: 2025-11-29 06:42:48.002029703 +0000 UTC m=+10.431751183 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 06:42:48 compute-0 podman[248638]: 2025-11-29 06:42:48.16825178 +0000 UTC m=+0.066300224 container create ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=nova_compute_init, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:42:48 compute-0 podman[248638]: 2025-11-29 06:42:48.129223747 +0000 UTC m=+0.027272291 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 06:42:48 compute-0 python3[248407]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 29 06:42:48 compute-0 sudo[248576]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:48 compute-0 sudo[248394]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:42:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:48.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:42:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:42:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:42:49 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:42:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:42:49 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:42:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:42:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:42:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:49.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:42:49 compute-0 ceph-mon[74654]: pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:49 compute-0 ceph-mon[74654]: pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:50.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:50 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:42:50 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev a0d2ae4c-55a5-464b-a299-be500cfefe84 does not exist
Nov 29 06:42:50 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 14d7e38b-c356-4bc7-9820-7165261f234d does not exist
Nov 29 06:42:50 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev f887f232-98b1-433a-b3c5-8e89cd87998a does not exist
Nov 29 06:42:51 compute-0 sudo[248840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcridxrhldkkelwykqsouyrrofxoomhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398570.7898471-4014-263729893832381/AnsiballZ_stat.py'
Nov 29 06:42:51 compute-0 sudo[248840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:51 compute-0 sudo[248843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:42:51 compute-0 sudo[248843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:51 compute-0 sudo[248843]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:51 compute-0 python3.9[248842]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:42:51 compute-0 sudo[248840]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:51 compute-0 sudo[248869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:42:51 compute-0 sudo[248869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:51 compute-0 sudo[248869]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:42:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:51.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:42:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:52.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:52 compute-0 sudo[249044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dotuypwtnnqplvpezdfqmwaabjkbvlvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398572.2184753-4050-71707058412022/AnsiballZ_container_config_data.py'
Nov 29 06:42:52 compute-0 sudo[249044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:52 compute-0 python3.9[249046]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 29 06:42:52 compute-0 sudo[249044]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:42:53 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:42:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:42:53 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:42:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:53.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:42:53 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:42:53 compute-0 sudo[249167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:42:53 compute-0 sudo[249167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:53 compute-0 sudo[249167]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:53 compute-0 sudo[249226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulfpfjxjcesaiyffrrxuanjxaiyabgwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398573.231929-4077-64384031976274/AnsiballZ_container_config_hash.py'
Nov 29 06:42:53 compute-0 sudo[249226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:53 compute-0 sudo[249224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:42:53 compute-0 sudo[249224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:53 compute-0 sudo[249224]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:53 compute-0 sudo[249252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:42:53 compute-0 sudo[249252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:53 compute-0 sudo[249252]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:54 compute-0 sudo[249277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:42:54 compute-0 sudo[249277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:54 compute-0 python3.9[249239]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 06:42:54 compute-0 sudo[249226]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:54 compute-0 sshd-session[249071]: Invalid user bitwarden from 118.193.39.127 port 53048
Nov 29 06:42:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:42:54
Nov 29 06:42:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:42:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:42:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr']
Nov 29 06:42:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:42:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:42:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:42:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:42:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:42:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:42:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:42:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:42:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:54.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:42:54 compute-0 sshd-session[249071]: Received disconnect from 118.193.39.127 port 53048:11: Bye Bye [preauth]
Nov 29 06:42:54 compute-0 sshd-session[249071]: Disconnected from invalid user bitwarden 118.193.39.127 port 53048 [preauth]
Nov 29 06:42:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:42:54 compute-0 podman[249367]: 2025-11-29 06:42:54.438176088 +0000 UTC m=+0.047070661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:42:54 compute-0 ceph-mon[74654]: pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:42:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:42:54 compute-0 ceph-mon[74654]: pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:42:54 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:42:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:54 compute-0 sudo[249507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvojjyynbtdkufppxxiocszyilbziywl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764398574.5708995-4107-168521872814270/AnsiballZ_edpm_container_manage.py'
Nov 29 06:42:54 compute-0 sudo[249507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:55 compute-0 python3[249509]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 06:42:55 compute-0 podman[249367]: 2025-11-29 06:42:55.617609649 +0000 UTC m=+1.226504172 container create 61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 06:42:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:42:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:55.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:42:56 compute-0 systemd[1]: Started libpod-conmon-61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27.scope.
Nov 29 06:42:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:42:56 compute-0 podman[249367]: 2025-11-29 06:42:56.269357518 +0000 UTC m=+1.878252101 container init 61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kirch, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:42:56 compute-0 podman[249367]: 2025-11-29 06:42:56.282393576 +0000 UTC m=+1.891288109 container start 61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 06:42:56 compute-0 admiring_kirch[249536]: 167 167
Nov 29 06:42:56 compute-0 systemd[1]: libpod-61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27.scope: Deactivated successfully.
Nov 29 06:42:56 compute-0 conmon[249536]: conmon 61a53c3dbd62fa12dcd4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27.scope/container/memory.events
Nov 29 06:42:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:42:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:56.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:42:56 compute-0 ceph-mon[74654]: pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:56 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:42:56 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:42:56 compute-0 ceph-mon[74654]: pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:56 compute-0 podman[249367]: 2025-11-29 06:42:56.326542114 +0000 UTC m=+1.935436637 container attach 61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kirch, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:42:56 compute-0 podman[249367]: 2025-11-29 06:42:56.327192972 +0000 UTC m=+1.936087465 container died 61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:42:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3938f4c4b2856b4d0b3538d3909da9246a6c305cb2f84f7c9a1dc59313dfef7-merged.mount: Deactivated successfully.
Nov 29 06:42:56 compute-0 podman[249367]: 2025-11-29 06:42:56.400968717 +0000 UTC m=+2.009863220 container remove 61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:42:56 compute-0 systemd[1]: libpod-conmon-61a53c3dbd62fa12dcd49b26a6576fa9da205e1a33bc18ffe84550ade4e32e27.scope: Deactivated successfully.
Nov 29 06:42:56 compute-0 podman[249553]: 2025-11-29 06:42:56.414819509 +0000 UTC m=+0.098730642 container create e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:42:56 compute-0 podman[249553]: 2025-11-29 06:42:56.356694086 +0000 UTC m=+0.040605329 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 06:42:56 compute-0 python3[249509]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 29 06:42:56 compute-0 sudo[249507]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:56 compute-0 podman[249601]: 2025-11-29 06:42:56.581944951 +0000 UTC m=+0.058179726 container create 7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:42:56 compute-0 podman[249601]: 2025-11-29 06:42:56.55186376 +0000 UTC m=+0.028098535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:42:56 compute-0 systemd[1]: Started libpod-conmon-7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a.scope.
Nov 29 06:42:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387fb579f218c2ef6ebd249585719d765b357bbe93d1c7e2cff6f72775a81a5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387fb579f218c2ef6ebd249585719d765b357bbe93d1c7e2cff6f72775a81a5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387fb579f218c2ef6ebd249585719d765b357bbe93d1c7e2cff6f72775a81a5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387fb579f218c2ef6ebd249585719d765b357bbe93d1c7e2cff6f72775a81a5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387fb579f218c2ef6ebd249585719d765b357bbe93d1c7e2cff6f72775a81a5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:42:57 compute-0 podman[249601]: 2025-11-29 06:42:57.00477414 +0000 UTC m=+0.481008945 container init 7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 06:42:57 compute-0 podman[249601]: 2025-11-29 06:42:57.011796038 +0000 UTC m=+0.488030853 container start 7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 06:42:57 compute-0 sudo[249782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evhcfmvmtetcokmjrotrzungwgeueutp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398576.8103175-4131-272989207786248/AnsiballZ_stat.py'
Nov 29 06:42:57 compute-0 sudo[249782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:57 compute-0 podman[249601]: 2025-11-29 06:42:57.230092637 +0000 UTC m=+0.706327432 container attach 7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 29 06:42:57 compute-0 python3.9[249784]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:42:57 compute-0 sudo[249782]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:57.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:57 compute-0 ceph-mon[74654]: pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:57 compute-0 flamboyant_mendel[249651]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:42:57 compute-0 flamboyant_mendel[249651]: --> relative data size: 1.0
Nov 29 06:42:57 compute-0 flamboyant_mendel[249651]: --> All data devices are unavailable
Nov 29 06:42:57 compute-0 systemd[1]: libpod-7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a.scope: Deactivated successfully.
Nov 29 06:42:57 compute-0 podman[249601]: 2025-11-29 06:42:57.815184902 +0000 UTC m=+1.291419707 container died 7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-387fb579f218c2ef6ebd249585719d765b357bbe93d1c7e2cff6f72775a81a5e-merged.mount: Deactivated successfully.
Nov 29 06:42:57 compute-0 podman[249601]: 2025-11-29 06:42:57.879657814 +0000 UTC m=+1.355892589 container remove 7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mendel, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:42:57 compute-0 systemd[1]: libpod-conmon-7712a5f5cebf9fc315adbb7d9ff5f4c32408f625fd95411c9801265c8aa0514a.scope: Deactivated successfully.
Nov 29 06:42:57 compute-0 sudo[249277]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:57 compute-0 sudo[249912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:42:57 compute-0 sudo[249912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:57 compute-0 sudo[249912]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:58 compute-0 sudo[249961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:42:58 compute-0 sudo[249961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:58 compute-0 sudo[249961]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:58 compute-0 sudo[250011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mngwurcdktdnhcmeijveyjmlglpkwvkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398577.777815-4158-223991356795038/AnsiballZ_file.py'
Nov 29 06:42:58 compute-0 sudo[250011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:58 compute-0 sudo[250013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:42:58 compute-0 sudo[250013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:58 compute-0 sudo[250013]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:58 compute-0 sudo[250040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:42:58 compute-0 sudo[250040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:42:58 compute-0 python3.9[250019]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:42:58 compute-0 sudo[250011]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:42:58.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:58 compute-0 podman[250155]: 2025-11-29 06:42:58.498799911 +0000 UTC m=+0.040730292 container create cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:42:58 compute-0 systemd[1]: Started libpod-conmon-cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4.scope.
Nov 29 06:42:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:42:58 compute-0 podman[250155]: 2025-11-29 06:42:58.574070019 +0000 UTC m=+0.116000420 container init cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 06:42:58 compute-0 podman[250155]: 2025-11-29 06:42:58.479119325 +0000 UTC m=+0.021049736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:42:58 compute-0 podman[250155]: 2025-11-29 06:42:58.58119716 +0000 UTC m=+0.123127541 container start cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:42:58 compute-0 podman[250155]: 2025-11-29 06:42:58.585521182 +0000 UTC m=+0.127451563 container attach cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 06:42:58 compute-0 naughty_thompson[250195]: 167 167
Nov 29 06:42:58 compute-0 systemd[1]: libpod-cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4.scope: Deactivated successfully.
Nov 29 06:42:58 compute-0 podman[250155]: 2025-11-29 06:42:58.590215355 +0000 UTC m=+0.132145746 container died cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 06:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-838e9d882b3a74df8253e100d8a6de1d57eb4b7081db25e329035e00d7f39ca2-merged.mount: Deactivated successfully.
Nov 29 06:42:58 compute-0 podman[250155]: 2025-11-29 06:42:58.631269205 +0000 UTC m=+0.173199586 container remove cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:42:58 compute-0 systemd[1]: libpod-conmon-cfe07ba786fa8b9d8069cbd423488795d039f5f0057ddde9ff8d5a4b7cdcbaf4.scope: Deactivated successfully.
Nov 29 06:42:58 compute-0 podman[250200]: 2025-11-29 06:42:58.697571809 +0000 UTC m=+0.071452260 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:42:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:58 compute-0 podman[250286]: 2025-11-29 06:42:58.806497957 +0000 UTC m=+0.046346311 container create a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_austin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 06:42:58 compute-0 sudo[250326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aptxhhkekybqgpqvldvwnsztjkxalwoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398578.3347924-4158-76573261778431/AnsiballZ_copy.py'
Nov 29 06:42:58 compute-0 sudo[250326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:58 compute-0 systemd[1]: Started libpod-conmon-a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b.scope.
Nov 29 06:42:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:42:58 compute-0 podman[250286]: 2025-11-29 06:42:58.788086437 +0000 UTC m=+0.027934821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fcc50abcf56f6748d08f1a73ad6d682121fed7e56efbe975c5f385f6c9c9ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fcc50abcf56f6748d08f1a73ad6d682121fed7e56efbe975c5f385f6c9c9ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fcc50abcf56f6748d08f1a73ad6d682121fed7e56efbe975c5f385f6c9c9ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2fcc50abcf56f6748d08f1a73ad6d682121fed7e56efbe975c5f385f6c9c9ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:42:58 compute-0 podman[250286]: 2025-11-29 06:42:58.897727335 +0000 UTC m=+0.137575709 container init a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_austin, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 06:42:58 compute-0 podman[250286]: 2025-11-29 06:42:58.910572768 +0000 UTC m=+0.150421122 container start a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:42:58 compute-0 podman[250286]: 2025-11-29 06:42:58.91417112 +0000 UTC m=+0.154019574 container attach a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 06:42:59 compute-0 python3.9[250328]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764398578.3347924-4158-76573261778431/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 06:42:59 compute-0 sudo[250326]: pam_unix(sudo:session): session closed for user root
Nov 29 06:42:59 compute-0 sudo[250411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtabjqgbyrjrbfuqnjcdictinwurgcqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398578.3347924-4158-76573261778431/AnsiballZ_systemd.py'
Nov 29 06:42:59 compute-0 sudo[250411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:42:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:42:59 compute-0 python3.9[250413]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 06:42:59 compute-0 systemd[1]: Reloading.
Nov 29 06:42:59 compute-0 systemd-rc-local-generator[250443]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:42:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:42:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:42:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:42:59.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:42:59 compute-0 systemd-sysv-generator[250448]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:42:59 compute-0 zealous_austin[250332]: {
Nov 29 06:42:59 compute-0 zealous_austin[250332]:     "1": [
Nov 29 06:42:59 compute-0 zealous_austin[250332]:         {
Nov 29 06:42:59 compute-0 zealous_austin[250332]:             "devices": [
Nov 29 06:42:59 compute-0 zealous_austin[250332]:                 "/dev/loop3"
Nov 29 06:42:59 compute-0 zealous_austin[250332]:             ],
Nov 29 06:42:59 compute-0 zealous_austin[250332]:             "lv_name": "ceph_lv0",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:             "lv_size": "7511998464",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:             "name": "ceph_lv0",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:             "tags": {
Nov 29 06:42:59 compute-0 zealous_austin[250332]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:                 "ceph.cluster_name": "ceph",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:                 "ceph.crush_device_class": "",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:                 "ceph.encrypted": "0",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:                 "ceph.osd_id": "1",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:                 "ceph.type": "block",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:                 "ceph.vdo": "0"
Nov 29 06:42:59 compute-0 zealous_austin[250332]:             },
Nov 29 06:42:59 compute-0 zealous_austin[250332]:             "type": "block",
Nov 29 06:42:59 compute-0 zealous_austin[250332]:             "vg_name": "ceph_vg0"
Nov 29 06:42:59 compute-0 zealous_austin[250332]:         }
Nov 29 06:42:59 compute-0 zealous_austin[250332]:     ]
Nov 29 06:42:59 compute-0 zealous_austin[250332]: }
Nov 29 06:42:59 compute-0 ceph-mon[74654]: pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:42:59 compute-0 podman[250286]: 2025-11-29 06:42:59.789675782 +0000 UTC m=+1.029524136 container died a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_austin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 06:42:59 compute-0 systemd[1]: libpod-a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b.scope: Deactivated successfully.
Nov 29 06:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2fcc50abcf56f6748d08f1a73ad6d682121fed7e56efbe975c5f385f6c9c9ed-merged.mount: Deactivated successfully.
Nov 29 06:42:59 compute-0 sudo[250411]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:00 compute-0 podman[250286]: 2025-11-29 06:43:00.000642104 +0000 UTC m=+1.240490488 container remove a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_austin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 29 06:43:00 compute-0 systemd[1]: libpod-conmon-a95c5f7e7bda1bfa6177f4194e7d8735f81e9a2fdd50bd9b7c3310ab6252b58b.scope: Deactivated successfully.
Nov 29 06:43:00 compute-0 sudo[250040]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:00 compute-0 sudo[250468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:43:00 compute-0 sudo[250468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:43:00 compute-0 sudo[250468]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:00 compute-0 sudo[250516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:43:00 compute-0 sudo[250516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:43:00 compute-0 sudo[250516]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:00 compute-0 sudo[250565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:43:00 compute-0 sudo[250565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:43:00 compute-0 sudo[250565]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:00 compute-0 sudo[250616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beuctmkfklrhlnmfcgqfydzeeokygbvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398578.3347924-4158-76573261778431/AnsiballZ_systemd.py'
Nov 29 06:43:00 compute-0 sudo[250616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:43:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:00.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:00 compute-0 sudo[250617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:43:00 compute-0 sudo[250617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:43:00 compute-0 python3.9[250624]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 06:43:00 compute-0 systemd[1]: Reloading.
Nov 29 06:43:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:00 compute-0 podman[250687]: 2025-11-29 06:43:00.741449628 +0000 UTC m=+0.060933143 container create f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 06:43:00 compute-0 podman[250687]: 2025-11-29 06:43:00.720121165 +0000 UTC m=+0.039604720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:43:00 compute-0 systemd-rc-local-generator[250733]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 06:43:00 compute-0 systemd-sysv-generator[250736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 06:43:01 compute-0 systemd[1]: Started libpod-conmon-f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838.scope.
Nov 29 06:43:01 compute-0 systemd[1]: Starting nova_compute container...
Nov 29 06:43:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:43:01 compute-0 podman[250687]: 2025-11-29 06:43:01.126678125 +0000 UTC m=+0.446161730 container init f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 06:43:01 compute-0 podman[250687]: 2025-11-29 06:43:01.141946716 +0000 UTC m=+0.461430261 container start f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:43:01 compute-0 elated_jepsen[250743]: 167 167
Nov 29 06:43:01 compute-0 systemd[1]: libpod-f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838.scope: Deactivated successfully.
Nov 29 06:43:01 compute-0 podman[250687]: 2025-11-29 06:43:01.15056713 +0000 UTC m=+0.470050675 container attach f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:43:01 compute-0 podman[250687]: 2025-11-29 06:43:01.151530347 +0000 UTC m=+0.471013872 container died f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:43:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:43:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-8905dc5f8b08d98e39bfe12b606aa0f2981fcb63de57eff13463c088bf372ca0-merged.mount: Deactivated successfully.
Nov 29 06:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:01 compute-0 podman[250687]: 2025-11-29 06:43:01.195553721 +0000 UTC m=+0.515037236 container remove f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jepsen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:43:01 compute-0 podman[250744]: 2025-11-29 06:43:01.20542827 +0000 UTC m=+0.122262566 container init e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 06:43:01 compute-0 systemd[1]: libpod-conmon-f5c242c1afae0fb136df49822ade5958195ccfebc2530e3c26b8dc81a4bb3838.scope: Deactivated successfully.
Nov 29 06:43:01 compute-0 podman[250744]: 2025-11-29 06:43:01.218037817 +0000 UTC m=+0.134872063 container start e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 06:43:01 compute-0 podman[250744]: nova_compute
Nov 29 06:43:01 compute-0 nova_compute[250764]: + sudo -E kolla_set_configs
Nov 29 06:43:01 compute-0 systemd[1]: Started nova_compute container.
Nov 29 06:43:01 compute-0 sudo[250616]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Validating config file
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Copying service configuration files
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Deleting /etc/ceph
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Creating directory /etc/ceph
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Writing out command to execute
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 06:43:01 compute-0 nova_compute[250764]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 06:43:01 compute-0 nova_compute[250764]: ++ cat /run_command
Nov 29 06:43:01 compute-0 podman[250793]: 2025-11-29 06:43:01.356712136 +0000 UTC m=+0.022442976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:43:01 compute-0 nova_compute[250764]: + CMD=nova-compute
Nov 29 06:43:01 compute-0 nova_compute[250764]: + ARGS=
Nov 29 06:43:01 compute-0 nova_compute[250764]: + sudo kolla_copy_cacerts
Nov 29 06:43:01 compute-0 podman[250793]: 2025-11-29 06:43:01.466373205 +0000 UTC m=+0.132103995 container create 93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_noether, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:43:01 compute-0 nova_compute[250764]: + [[ ! -n '' ]]
Nov 29 06:43:01 compute-0 nova_compute[250764]: + . kolla_extend_start
Nov 29 06:43:01 compute-0 nova_compute[250764]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 06:43:01 compute-0 nova_compute[250764]: Running command: 'nova-compute'
Nov 29 06:43:01 compute-0 nova_compute[250764]: + umask 0022
Nov 29 06:43:01 compute-0 nova_compute[250764]: + exec nova-compute
Nov 29 06:43:01 compute-0 systemd[1]: Started libpod-conmon-93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221.scope.
Nov 29 06:43:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fa92367fd53169deb6f0cf7a9e1775781862c3d52081faa72166663ffe1928a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fa92367fd53169deb6f0cf7a9e1775781862c3d52081faa72166663ffe1928a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fa92367fd53169deb6f0cf7a9e1775781862c3d52081faa72166663ffe1928a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fa92367fd53169deb6f0cf7a9e1775781862c3d52081faa72166663ffe1928a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:01 compute-0 podman[250793]: 2025-11-29 06:43:01.602859262 +0000 UTC m=+0.268590072 container init 93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_noether, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:43:01 compute-0 podman[250793]: 2025-11-29 06:43:01.619128552 +0000 UTC m=+0.284859352 container start 93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 06:43:01 compute-0 podman[250793]: 2025-11-29 06:43:01.62332564 +0000 UTC m=+0.289056440 container attach 93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_noether, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:43:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:01.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:01 compute-0 ceph-mon[74654]: pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:02.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:02 compute-0 compassionate_noether[250834]: {
Nov 29 06:43:02 compute-0 compassionate_noether[250834]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:43:02 compute-0 compassionate_noether[250834]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:43:02 compute-0 compassionate_noether[250834]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:43:02 compute-0 compassionate_noether[250834]:         "osd_id": 1,
Nov 29 06:43:02 compute-0 compassionate_noether[250834]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:43:02 compute-0 compassionate_noether[250834]:         "type": "bluestore"
Nov 29 06:43:02 compute-0 compassionate_noether[250834]:     }
Nov 29 06:43:02 compute-0 compassionate_noether[250834]: }
Nov 29 06:43:02 compute-0 systemd[1]: libpod-93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221.scope: Deactivated successfully.
Nov 29 06:43:02 compute-0 podman[250793]: 2025-11-29 06:43:02.61197188 +0000 UTC m=+1.277702660 container died 93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_noether, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:43:02 compute-0 python3.9[250976]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:43:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fa92367fd53169deb6f0cf7a9e1775781862c3d52081faa72166663ffe1928a-merged.mount: Deactivated successfully.
Nov 29 06:43:03 compute-0 podman[250793]: 2025-11-29 06:43:03.086469129 +0000 UTC m=+1.752199909 container remove 93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_noether, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:43:03 compute-0 systemd[1]: libpod-conmon-93775b35020e7e85d2d9ff2be97365d398206ef6b14954d4fa59563a226f9221.scope: Deactivated successfully.
Nov 29 06:43:03 compute-0 sudo[250617]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:43:03 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:43:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:43:03 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:43:03 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev b0cbca7c-1c60-43fa-86ea-43504293d67a does not exist
Nov 29 06:43:03 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev be01cf1e-96c0-4640-9bc5-27e4745b4bb2 does not exist
Nov 29 06:43:03 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 30b101ee-8f54-4c1b-b61f-80858004a8d0 does not exist
Nov 29 06:43:03 compute-0 sudo[251020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:43:03 compute-0 sudo[251020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:43:03 compute-0 sudo[251020]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:03 compute-0 sudo[251045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:43:03 compute-0 sudo[251045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:43:03 compute-0 sudo[251045]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:03.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:03 compute-0 python3.9[251195]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:43:03 compute-0 ceph-mon[74654]: pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:03 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:43:03 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.089 250780 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.090 250780 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.090 250780 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.091 250780 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.252 250780 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.283 250780 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.284 250780 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 29 06:43:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:04.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:43:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:04 compute-0 python3.9[251349]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.793 250780 INFO nova.virt.driver [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.951 250780 INFO nova.compute.provider_config [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.969 250780 DEBUG oslo_concurrency.lockutils [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.970 250780 DEBUG oslo_concurrency.lockutils [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.970 250780 DEBUG oslo_concurrency.lockutils [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.971 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.971 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.971 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.971 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.971 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.971 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.972 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.972 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.972 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.972 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.972 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.973 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.973 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.973 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.973 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.974 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.974 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.974 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.974 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.974 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.975 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.975 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.975 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.975 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.975 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.976 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.976 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.976 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.976 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.977 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.977 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.977 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.977 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.978 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.978 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.978 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.978 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.978 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.979 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.979 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.979 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.979 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.980 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.980 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.980 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.980 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.981 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.981 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.981 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.981 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.981 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.982 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.982 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.982 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.982 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.982 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.982 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.982 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.983 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.983 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.983 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.983 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.983 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.983 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.983 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.984 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.984 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.984 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.984 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.984 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.984 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.985 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.985 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.985 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.985 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.985 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.985 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.985 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.986 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.986 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.986 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.986 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.986 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.986 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.987 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.987 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.987 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.987 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.987 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.987 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.988 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.989 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.989 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.989 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.989 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.989 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.989 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.989 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.990 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.990 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.990 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.990 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.990 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.990 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.990 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.991 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.991 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.991 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.991 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.991 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.991 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.991 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.992 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.993 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.993 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.993 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.993 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.993 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.993 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.993 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.994 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.994 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.994 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.994 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.994 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.994 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.994 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.995 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.995 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.995 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.995 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.995 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.995 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.995 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.996 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.996 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.996 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.996 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.996 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.996 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.997 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.997 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.997 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.997 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.997 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.997 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.998 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.998 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.998 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.998 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.998 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.998 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.998 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.999 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.999 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.999 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.999 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.999 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:04 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.999 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:04.999 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.000 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.000 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.000 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.000 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.000 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.000 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.000 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.001 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.001 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.001 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.001 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.001 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.001 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.002 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.002 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.002 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.002 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.002 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.002 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.003 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.003 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.003 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.003 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.003 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.003 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.003 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.004 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.005 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.005 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.005 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.005 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.005 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.005 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.005 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.006 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.006 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.006 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.006 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.006 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.007 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.007 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.007 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.007 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.007 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.008 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.008 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.008 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.008 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.008 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.008 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.009 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.009 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.009 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.009 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.009 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.009 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.009 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.010 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.010 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.010 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.010 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.010 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.010 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.010 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.011 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.012 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.012 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.012 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.012 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.012 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.012 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.012 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.013 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.013 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.013 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.013 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.013 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.013 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.014 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.014 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.014 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.014 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.014 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.014 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.014 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.015 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.015 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.015 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.015 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.015 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.015 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.016 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.016 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.016 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.016 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.016 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.016 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.017 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.017 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.017 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.017 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.017 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.018 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.018 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.018 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.018 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.018 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.018 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.019 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.019 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.019 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.019 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.019 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.020 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.020 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.020 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.020 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.020 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.020 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.021 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.021 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.021 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.021 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.021 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.022 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.022 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.022 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.022 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.023 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.023 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.023 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.023 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.023 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.023 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.024 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.024 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.024 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.024 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.024 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.025 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.025 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.025 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.025 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.025 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.026 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.026 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.026 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.026 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.026 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.026 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.027 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.027 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.027 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.027 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.027 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.028 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.028 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.028 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.028 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.028 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.028 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.029 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.029 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.029 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.029 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.029 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.030 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.030 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.030 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.030 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.031 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.031 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.031 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.031 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.031 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.031 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.032 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.032 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.032 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.032 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.032 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.033 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.033 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.033 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.033 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.033 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.033 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.034 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.034 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.034 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.034 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.034 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.035 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.035 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.035 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.035 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.035 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.036 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.036 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.036 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.036 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.036 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.037 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.037 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.037 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.037 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.037 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.038 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.038 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.038 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.038 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.038 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.038 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.039 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.039 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.039 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.039 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.039 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.040 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.040 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.040 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.040 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.040 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.040 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.041 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.041 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.041 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.041 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.041 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.042 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.042 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.042 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.042 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.042 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.043 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.043 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.043 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.043 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.043 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.043 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.044 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.044 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.044 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.044 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.044 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.045 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.045 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.045 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.045 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.045 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.046 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.046 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.046 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.046 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.046 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.046 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.047 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.047 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.047 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.047 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.047 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.048 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.048 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.048 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.048 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.048 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.049 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.049 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.049 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.049 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.049 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.050 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.050 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.050 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.050 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.050 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.051 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.051 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.051 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.051 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.052 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.052 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.052 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.052 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.052 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.053 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.053 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.053 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.053 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.053 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.054 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.054 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.054 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.054 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.055 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.055 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.055 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.055 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.055 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.056 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.056 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.056 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.056 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.056 250780 WARNING oslo_config.cfg [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 06:43:05 compute-0 nova_compute[250764]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 06:43:05 compute-0 nova_compute[250764]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 06:43:05 compute-0 nova_compute[250764]: and ``live_migration_inbound_addr`` respectively.
Nov 29 06:43:05 compute-0 nova_compute[250764]: ).  Its value may be silently ignored in the future.
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.057 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.057 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.057 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.057 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.057 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.058 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.058 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.058 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.058 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.059 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.059 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.059 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.059 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.059 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.059 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.060 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.060 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.060 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.060 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rbd_secret_uuid        = 336ec58c-893b-528f-a0c1-6ed1196bc047 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.060 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.061 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.061 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.061 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.061 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.061 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.062 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.062 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.062 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.062 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.062 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.062 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.063 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.063 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.063 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.063 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.063 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.064 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.064 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.064 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.064 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.064 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.065 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.065 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.065 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.065 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.065 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.065 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.066 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.066 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.066 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.066 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.066 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.066 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.067 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.068 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.068 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.068 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.068 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.068 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.068 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.068 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.069 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.069 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.069 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.069 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.069 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.069 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.070 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.070 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.070 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.070 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.070 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.071 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.071 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.071 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.071 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.071 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.072 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.072 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.072 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.072 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.073 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.073 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.073 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.073 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.073 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.073 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.074 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.074 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.074 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.074 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.074 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.074 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.075 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.075 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.075 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.075 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.076 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.076 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.076 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.076 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.076 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.076 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.077 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.077 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.077 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.077 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.077 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.078 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.078 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.078 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.078 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.078 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.079 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.079 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.079 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.079 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.079 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.080 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.080 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.080 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.080 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.080 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.081 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.081 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.081 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.081 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.081 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.082 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.082 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.082 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.082 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.082 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.083 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.083 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.083 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.083 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.083 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.083 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.083 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.084 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.084 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.084 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.084 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.084 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.084 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.085 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.085 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.085 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.085 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.085 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.086 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.086 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.086 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.086 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.087 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.087 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.087 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.087 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.087 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.087 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.088 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.088 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.088 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.088 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.088 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.089 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.089 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.089 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.089 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.089 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.090 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.090 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.090 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.090 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.090 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.091 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.091 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.091 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.091 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.091 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.092 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.092 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.092 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.092 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.092 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.092 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.093 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.093 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.093 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.093 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.093 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.094 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.094 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.094 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.094 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.094 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.095 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.095 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.095 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.095 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.095 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.096 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.096 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.096 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.096 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.096 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.097 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.097 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.097 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.097 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.097 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.098 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.098 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.098 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.098 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.098 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.099 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.099 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.099 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.099 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.099 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.100 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.100 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.100 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.100 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.100 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.101 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.101 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.101 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.101 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.101 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.102 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.102 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.102 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.102 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.102 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.103 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.103 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.103 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.103 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.104 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.104 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.104 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.104 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.105 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.105 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.105 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.105 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.106 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.106 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.106 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.106 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.106 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.107 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.107 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.107 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.107 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.107 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.108 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.108 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.108 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.108 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.108 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.109 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.109 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.109 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.109 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.109 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.109 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.110 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.110 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.110 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.110 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.110 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.111 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.111 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.111 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.111 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.111 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.112 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.112 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.112 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.112 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.113 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.113 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.113 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.113 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.113 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.114 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.114 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.114 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.114 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.114 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.115 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.115 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.115 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.115 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.115 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.115 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.116 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.116 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.116 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.116 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.116 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.117 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.117 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.117 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.117 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.118 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.118 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.118 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.118 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.118 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.119 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.119 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.119 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.119 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.119 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.119 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.120 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.120 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.120 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.120 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.120 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.121 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.121 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.121 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.121 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.121 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.122 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.122 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.122 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.122 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.122 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.123 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.123 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.123 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.123 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.123 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.124 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.124 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.124 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.124 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.124 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.125 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.125 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.125 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.125 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.125 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.126 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.126 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.126 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.126 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.126 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.127 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.127 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.127 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.127 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.127 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.127 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.128 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.128 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.128 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.128 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.128 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.129 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.129 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.129 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.129 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.129 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.129 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.130 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.130 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.130 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.130 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.130 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.131 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.131 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.131 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.131 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.132 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.132 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.132 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.132 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.132 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.133 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.133 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.133 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.133 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.133 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.133 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.134 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.134 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.134 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.134 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.134 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.134 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.135 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.135 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.135 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.135 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.135 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.136 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.136 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.136 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.136 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.136 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.137 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.137 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.137 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.137 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.138 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.138 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.138 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.138 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.138 250780 DEBUG oslo_service.service [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.139 250780 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.162 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.163 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.163 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.163 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 29 06:43:05 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 06:43:05 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.231 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb5007db7f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.235 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb5007db7f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.236 250780 INFO nova.virt.libvirt.driver [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Connection event '1' reason 'None'
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.259 250780 WARNING nova.virt.libvirt.driver [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 29 06:43:05 compute-0 nova_compute[250764]: 2025-11-29 06:43:05.259 250780 DEBUG nova.virt.libvirt.volume.mount [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 06:43:05 compute-0 sudo[251552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqjtzxasxydkuhhlxtpmyadfywpezbzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398585.188375-4338-207527213630799/AnsiballZ_podman_container.py'
Nov 29 06:43:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:05.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:05 compute-0 sudo[251552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:43:05 compute-0 ceph-mon[74654]: pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:05 compute-0 python3.9[251554]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 06:43:06 compute-0 sudo[251552]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:06 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 06:43:06 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.196 250780 INFO nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 06:43:06 compute-0 nova_compute[250764]: 
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <host>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <uuid>c87c7517-e569-4e42-8023-b11f25bc4e0c</uuid>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <cpu>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <arch>x86_64</arch>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model>EPYC-Rome-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <vendor>AMD</vendor>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <microcode version='16777317'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <signature family='23' model='49' stepping='0'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='x2apic'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='tsc-deadline'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='osxsave'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='hypervisor'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='tsc_adjust'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='spec-ctrl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='stibp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='arch-capabilities'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='ssbd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='cmp_legacy'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='topoext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='virt-ssbd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='lbrv'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='tsc-scale'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='vmcb-clean'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='pause-filter'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='pfthreshold'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='svme-addr-chk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='rdctl-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='skip-l1dfl-vmentry'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='mds-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature name='pschange-mc-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <pages unit='KiB' size='4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <pages unit='KiB' size='2048'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <pages unit='KiB' size='1048576'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </cpu>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <power_management>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <suspend_mem/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </power_management>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <iommu support='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <migration_features>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <live/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <uri_transports>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <uri_transport>tcp</uri_transport>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <uri_transport>rdma</uri_transport>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </uri_transports>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </migration_features>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <topology>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <cells num='1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <cell id='0'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:           <memory unit='KiB'>7864324</memory>
Nov 29 06:43:06 compute-0 nova_compute[250764]:           <pages unit='KiB' size='4'>1966081</pages>
Nov 29 06:43:06 compute-0 nova_compute[250764]:           <pages unit='KiB' size='2048'>0</pages>
Nov 29 06:43:06 compute-0 nova_compute[250764]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 29 06:43:06 compute-0 nova_compute[250764]:           <distances>
Nov 29 06:43:06 compute-0 nova_compute[250764]:             <sibling id='0' value='10'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:           </distances>
Nov 29 06:43:06 compute-0 nova_compute[250764]:           <cpus num='8'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:           </cpus>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         </cell>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </cells>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </topology>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <cache>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </cache>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <secmodel>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model>selinux</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <doi>0</doi>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </secmodel>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <secmodel>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model>dac</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <doi>0</doi>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </secmodel>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </host>
Nov 29 06:43:06 compute-0 nova_compute[250764]: 
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <guest>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <os_type>hvm</os_type>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <arch name='i686'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <wordsize>32</wordsize>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <domain type='qemu'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <domain type='kvm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </arch>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <features>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <pae/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <nonpae/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <acpi default='on' toggle='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <apic default='on' toggle='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <cpuselection/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <deviceboot/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <disksnapshot default='on' toggle='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <externalSnapshot/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </features>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </guest>
Nov 29 06:43:06 compute-0 nova_compute[250764]: 
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <guest>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <os_type>hvm</os_type>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <arch name='x86_64'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <wordsize>64</wordsize>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <domain type='qemu'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <domain type='kvm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </arch>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <features>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <acpi default='on' toggle='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <apic default='on' toggle='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <cpuselection/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <deviceboot/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <disksnapshot default='on' toggle='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <externalSnapshot/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </features>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </guest>
Nov 29 06:43:06 compute-0 nova_compute[250764]: 
Nov 29 06:43:06 compute-0 nova_compute[250764]: </capabilities>
Nov 29 06:43:06 compute-0 nova_compute[250764]: 
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.204 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.229 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 06:43:06 compute-0 nova_compute[250764]: <domainCapabilities>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <domain>kvm</domain>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <arch>i686</arch>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <vcpu max='4096'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <iothreads supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <os supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <enum name='firmware'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <loader supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>rom</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pflash</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='readonly'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>yes</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>no</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='secure'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>no</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </loader>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </os>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <cpu>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='host-passthrough' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='hostPassthroughMigratable'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>on</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>off</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='maximum' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='maximumMigratable'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>on</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>off</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='host-model' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <vendor>AMD</vendor>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='x2apic'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='hypervisor'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='stibp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='ssbd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='overflow-recov'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='succor'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='ibrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='lbrv'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='tsc-scale'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='flushbyasid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='pause-filter'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='pfthreshold'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='disable' name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='custom' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cooperlake'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cooperlake-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cooperlake-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Dhyana-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Genoa'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amd-psfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='auto-ibrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='stibp-always-on'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amd-psfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='auto-ibrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='stibp-always-on'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Milan'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Milan-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Milan-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amd-psfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='stibp-always-on'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='GraniteRapids'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='prefetchiti'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='GraniteRapids-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='prefetchiti'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='GraniteRapids-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10-128'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10-256'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10-512'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='prefetchiti'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v6'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v7'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='KnightsMill'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512er'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512pf'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='KnightsMill-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512er'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512pf'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G4-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tbm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G5-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tbm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SierraForest'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cmpccxadd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SierraForest-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cmpccxadd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='athlon'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='athlon-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='core2duo'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='core2duo-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='coreduo'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='coreduo-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='n270'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='n270-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='phenom'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='phenom-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </cpu>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <memoryBacking supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <enum name='sourceType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>file</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>anonymous</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>memfd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </memoryBacking>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <devices>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <disk supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='diskDevice'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>disk</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>cdrom</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>floppy</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>lun</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='bus'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>fdc</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>scsi</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>usb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>sata</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-non-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </disk>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <graphics supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vnc</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>egl-headless</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>dbus</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </graphics>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <video supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='modelType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vga</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>cirrus</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>none</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>bochs</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>ramfb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </video>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <hostdev supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='mode'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>subsystem</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='startupPolicy'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>default</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>mandatory</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>requisite</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>optional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='subsysType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>usb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pci</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>scsi</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='capsType'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='pciBackend'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </hostdev>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <rng supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-non-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendModel'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>random</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>egd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>builtin</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </rng>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <filesystem supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='driverType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>path</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>handle</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtiofs</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </filesystem>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <tpm supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tpm-tis</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tpm-crb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendModel'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>emulator</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>external</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendVersion'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>2.0</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </tpm>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <redirdev supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='bus'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>usb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </redirdev>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <channel supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pty</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>unix</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </channel>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <crypto supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>qemu</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendModel'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>builtin</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </crypto>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <interface supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>default</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>passt</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </interface>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <panic supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>isa</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>hyperv</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </panic>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <console supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>null</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vc</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pty</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>dev</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>file</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pipe</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>stdio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>udp</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tcp</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>unix</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>qemu-vdagent</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>dbus</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </console>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </devices>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <features>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <gic supported='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <vmcoreinfo supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <genid supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <backingStoreInput supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <backup supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <async-teardown supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <ps2 supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <sev supported='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <sgx supported='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <hyperv supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='features'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>relaxed</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vapic</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>spinlocks</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vpindex</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>runtime</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>synic</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>stimer</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>reset</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vendor_id</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>frequencies</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>reenlightenment</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tlbflush</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>ipi</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>avic</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>emsr_bitmap</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>xmm_input</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <defaults>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <spinlocks>4095</spinlocks>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <stimer_direct>on</stimer_direct>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </defaults>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </hyperv>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <launchSecurity supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='sectype'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tdx</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </launchSecurity>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </features>
Nov 29 06:43:06 compute-0 nova_compute[250764]: </domainCapabilities>
Nov 29 06:43:06 compute-0 nova_compute[250764]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.238 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 06:43:06 compute-0 nova_compute[250764]: <domainCapabilities>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <domain>kvm</domain>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <arch>i686</arch>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <vcpu max='240'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <iothreads supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <os supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <enum name='firmware'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <loader supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>rom</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pflash</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='readonly'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>yes</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>no</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='secure'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>no</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </loader>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </os>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <cpu>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='host-passthrough' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='hostPassthroughMigratable'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>on</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>off</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='maximum' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='maximumMigratable'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>on</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>off</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='host-model' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <vendor>AMD</vendor>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='x2apic'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='hypervisor'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='stibp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='ssbd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='overflow-recov'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='succor'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='ibrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='lbrv'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='tsc-scale'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='flushbyasid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='pause-filter'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='pfthreshold'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='disable' name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='custom' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cooperlake'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cooperlake-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cooperlake-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Dhyana-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Genoa'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amd-psfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='auto-ibrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='stibp-always-on'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amd-psfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='auto-ibrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='stibp-always-on'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Milan'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Milan-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Milan-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amd-psfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='stibp-always-on'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='GraniteRapids'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='prefetchiti'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='GraniteRapids-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='prefetchiti'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='GraniteRapids-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10-128'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10-256'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10-512'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='prefetchiti'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:06.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v6'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v7'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='KnightsMill'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512er'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512pf'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='KnightsMill-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512er'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512pf'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G4-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tbm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G5-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tbm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SierraForest'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cmpccxadd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SierraForest-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cmpccxadd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='athlon'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='athlon-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='core2duo'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='core2duo-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='coreduo'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='coreduo-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='n270'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='n270-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='phenom'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='phenom-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </cpu>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <memoryBacking supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <enum name='sourceType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>file</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>anonymous</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>memfd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </memoryBacking>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <devices>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <disk supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='diskDevice'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>disk</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>cdrom</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>floppy</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>lun</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='bus'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>ide</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>fdc</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>scsi</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>usb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>sata</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-non-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </disk>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <graphics supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vnc</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>egl-headless</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>dbus</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </graphics>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <video supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='modelType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vga</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>cirrus</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>none</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>bochs</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>ramfb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </video>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <hostdev supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='mode'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>subsystem</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='startupPolicy'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>default</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>mandatory</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>requisite</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>optional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='subsysType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>usb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pci</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>scsi</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='capsType'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='pciBackend'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </hostdev>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <rng supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-non-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendModel'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>random</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>egd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>builtin</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </rng>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <filesystem supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='driverType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>path</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>handle</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtiofs</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </filesystem>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <tpm supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tpm-tis</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tpm-crb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendModel'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>emulator</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>external</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendVersion'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>2.0</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </tpm>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <redirdev supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='bus'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>usb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </redirdev>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <channel supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pty</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>unix</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </channel>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <crypto supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>qemu</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendModel'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>builtin</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </crypto>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <interface supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>default</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>passt</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </interface>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <panic supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>isa</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>hyperv</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </panic>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <console supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>null</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vc</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pty</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>dev</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>file</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pipe</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>stdio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>udp</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tcp</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>unix</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>qemu-vdagent</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>dbus</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </console>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </devices>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <features>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <gic supported='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <vmcoreinfo supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <genid supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <backingStoreInput supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <backup supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <async-teardown supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <ps2 supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <sev supported='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <sgx supported='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <hyperv supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='features'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>relaxed</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vapic</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>spinlocks</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vpindex</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>runtime</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>synic</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>stimer</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>reset</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vendor_id</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>frequencies</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>reenlightenment</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tlbflush</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>ipi</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>avic</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>emsr_bitmap</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>xmm_input</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <defaults>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <spinlocks>4095</spinlocks>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <stimer_direct>on</stimer_direct>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </defaults>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </hyperv>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <launchSecurity supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='sectype'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tdx</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </launchSecurity>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </features>
Nov 29 06:43:06 compute-0 nova_compute[250764]: </domainCapabilities>
Nov 29 06:43:06 compute-0 nova_compute[250764]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.272 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.275 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 06:43:06 compute-0 nova_compute[250764]: <domainCapabilities>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <domain>kvm</domain>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <arch>x86_64</arch>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <vcpu max='4096'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <iothreads supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <os supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <enum name='firmware'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>efi</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <loader supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>rom</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pflash</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='readonly'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>yes</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>no</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='secure'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>yes</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>no</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </loader>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </os>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <cpu>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='host-passthrough' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='hostPassthroughMigratable'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>on</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>off</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='maximum' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='maximumMigratable'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>on</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>off</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='host-model' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <vendor>AMD</vendor>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='x2apic'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='hypervisor'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='stibp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='ssbd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='overflow-recov'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='succor'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='ibrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='lbrv'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='tsc-scale'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='flushbyasid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='pause-filter'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='pfthreshold'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='disable' name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='custom' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cooperlake'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cooperlake-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cooperlake-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Dhyana-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Genoa'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amd-psfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='auto-ibrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='stibp-always-on'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amd-psfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='auto-ibrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='stibp-always-on'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Milan'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Milan-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Milan-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amd-psfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='stibp-always-on'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='GraniteRapids'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='prefetchiti'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='GraniteRapids-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='prefetchiti'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='GraniteRapids-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10-128'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10-256'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10-512'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='prefetchiti'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v6'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v7'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='KnightsMill'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512er'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512pf'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='KnightsMill-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512er'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512pf'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G4-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tbm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G5-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tbm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SierraForest'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cmpccxadd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SierraForest-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cmpccxadd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='athlon'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='athlon-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='core2duo'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='core2duo-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='coreduo'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='coreduo-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='n270'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='n270-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='phenom'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='phenom-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </cpu>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <memoryBacking supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <enum name='sourceType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>file</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>anonymous</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>memfd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </memoryBacking>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <devices>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <disk supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='diskDevice'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>disk</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>cdrom</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>floppy</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>lun</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='bus'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>fdc</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>scsi</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>usb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>sata</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-non-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </disk>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <graphics supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vnc</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>egl-headless</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>dbus</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </graphics>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <video supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='modelType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vga</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>cirrus</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>none</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>bochs</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>ramfb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </video>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <hostdev supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='mode'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>subsystem</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='startupPolicy'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>default</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>mandatory</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>requisite</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>optional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='subsysType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>usb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pci</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>scsi</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='capsType'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='pciBackend'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </hostdev>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <rng supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-non-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendModel'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>random</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>egd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>builtin</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </rng>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <filesystem supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='driverType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>path</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>handle</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtiofs</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </filesystem>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <tpm supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tpm-tis</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tpm-crb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendModel'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>emulator</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>external</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendVersion'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>2.0</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </tpm>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <redirdev supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='bus'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>usb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </redirdev>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <channel supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pty</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>unix</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </channel>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <crypto supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>qemu</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendModel'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>builtin</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </crypto>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <interface supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>default</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>passt</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </interface>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <panic supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>isa</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>hyperv</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </panic>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <console supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>null</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vc</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pty</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>dev</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>file</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pipe</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>stdio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>udp</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tcp</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>unix</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>qemu-vdagent</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>dbus</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </console>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </devices>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <features>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <gic supported='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <vmcoreinfo supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <genid supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <backingStoreInput supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <backup supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <async-teardown supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <ps2 supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <sev supported='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <sgx supported='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <hyperv supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='features'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>relaxed</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vapic</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>spinlocks</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vpindex</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>runtime</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>synic</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>stimer</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>reset</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vendor_id</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>frequencies</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>reenlightenment</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tlbflush</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>ipi</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>avic</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>emsr_bitmap</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>xmm_input</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <defaults>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <spinlocks>4095</spinlocks>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <stimer_direct>on</stimer_direct>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </defaults>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </hyperv>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <launchSecurity supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='sectype'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tdx</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </launchSecurity>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </features>
Nov 29 06:43:06 compute-0 nova_compute[250764]: </domainCapabilities>
Nov 29 06:43:06 compute-0 nova_compute[250764]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.352 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 06:43:06 compute-0 nova_compute[250764]: <domainCapabilities>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <domain>kvm</domain>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <arch>x86_64</arch>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <vcpu max='240'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <iothreads supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <os supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <enum name='firmware'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <loader supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>rom</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pflash</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='readonly'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>yes</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>no</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='secure'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>no</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </loader>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </os>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <cpu>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='host-passthrough' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='hostPassthroughMigratable'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>on</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>off</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='maximum' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='maximumMigratable'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>on</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>off</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='host-model' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <vendor>AMD</vendor>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='x2apic'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='hypervisor'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='stibp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='ssbd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='overflow-recov'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='succor'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='ibrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='lbrv'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='tsc-scale'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='flushbyasid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='pause-filter'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='pfthreshold'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <feature policy='disable' name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <mode name='custom' supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Broadwell-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cooperlake'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cooperlake-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Cooperlake-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Denverton-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Dhyana-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Genoa'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amd-psfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='auto-ibrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='stibp-always-on'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amd-psfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='auto-ibrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='stibp-always-on'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Milan'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Milan-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Milan-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amd-psfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='stibp-always-on'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-Rome-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='EPYC-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='GraniteRapids'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='prefetchiti'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='GraniteRapids-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='prefetchiti'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='GraniteRapids-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10-128'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10-256'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx10-512'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='prefetchiti'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Haswell-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v6'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Icelake-Server-v7'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='IvyBridge-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='KnightsMill'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512er'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512pf'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='KnightsMill-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512er'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512pf'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G4-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tbm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Opteron_G5-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fma4'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tbm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xop'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SapphireRapids-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='amx-tile'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-bf16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-fp16'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bitalg'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrc'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fzrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='la57'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='taa-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xfd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SierraForest'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cmpccxadd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='SierraForest-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ifma'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cmpccxadd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fbsdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='fsrs'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ibrs-all'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mcdt-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pbrsb-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='psdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='serialize'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vaes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Client-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='hle'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='rtm'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Skylake-Server-v5'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512bw'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512cd'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512dq'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512f'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='avx512vl'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='invpcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pcid'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='pku'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='mpx'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v2'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v3'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='core-capability'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='split-lock-detect'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='Snowridge-v4'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='cldemote'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='erms'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='gfni'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdir64b'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='movdiri'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='xsaves'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='athlon'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='athlon-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='core2duo'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='core2duo-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='coreduo'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='coreduo-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='n270'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='n270-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='ss'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='phenom'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <blockers model='phenom-v1'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnow'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <feature name='3dnowext'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </blockers>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </mode>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </cpu>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <memoryBacking supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <enum name='sourceType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>file</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>anonymous</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <value>memfd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </memoryBacking>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <devices>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <disk supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='diskDevice'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>disk</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>cdrom</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>floppy</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>lun</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='bus'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>ide</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>fdc</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>scsi</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>usb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>sata</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-non-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </disk>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <graphics supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vnc</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>egl-headless</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>dbus</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </graphics>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <video supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='modelType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vga</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>cirrus</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>none</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>bochs</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>ramfb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </video>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <hostdev supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='mode'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>subsystem</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='startupPolicy'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>default</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>mandatory</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>requisite</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>optional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='subsysType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>usb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pci</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>scsi</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='capsType'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='pciBackend'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </hostdev>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <rng supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtio-non-transitional</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendModel'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>random</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>egd</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>builtin</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </rng>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <filesystem supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='driverType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>path</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>handle</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>virtiofs</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </filesystem>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <tpm supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tpm-tis</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tpm-crb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendModel'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>emulator</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>external</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendVersion'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>2.0</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </tpm>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <redirdev supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='bus'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>usb</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </redirdev>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <channel supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pty</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>unix</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </channel>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <crypto supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>qemu</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendModel'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>builtin</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </crypto>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <interface supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='backendType'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>default</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>passt</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </interface>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <panic supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='model'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>isa</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>hyperv</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </panic>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <console supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='type'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>null</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vc</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pty</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>dev</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>file</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>pipe</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>stdio</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>udp</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tcp</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>unix</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>qemu-vdagent</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>dbus</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </console>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </devices>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <features>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <gic supported='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <vmcoreinfo supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <genid supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <backingStoreInput supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <backup supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <async-teardown supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <ps2 supported='yes'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <sev supported='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <sgx supported='no'/>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <hyperv supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='features'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>relaxed</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vapic</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>spinlocks</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vpindex</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>runtime</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>synic</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>stimer</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>reset</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>vendor_id</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>frequencies</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>reenlightenment</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tlbflush</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>ipi</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>avic</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>emsr_bitmap</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>xmm_input</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <defaults>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <spinlocks>4095</spinlocks>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <stimer_direct>on</stimer_direct>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </defaults>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </hyperv>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     <launchSecurity supported='yes'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       <enum name='sectype'>
Nov 29 06:43:06 compute-0 nova_compute[250764]:         <value>tdx</value>
Nov 29 06:43:06 compute-0 nova_compute[250764]:       </enum>
Nov 29 06:43:06 compute-0 nova_compute[250764]:     </launchSecurity>
Nov 29 06:43:06 compute-0 nova_compute[250764]:   </features>
Nov 29 06:43:06 compute-0 nova_compute[250764]: </domainCapabilities>
Nov 29 06:43:06 compute-0 nova_compute[250764]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.417 250780 DEBUG nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.418 250780 INFO nova.virt.libvirt.host [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Secure Boot support detected
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.420 250780 INFO nova.virt.libvirt.driver [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.420 250780 INFO nova.virt.libvirt.driver [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.433 250780 DEBUG nova.virt.libvirt.driver [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 06:43:06 compute-0 nova_compute[250764]:   <model>Nehalem</model>
Nov 29 06:43:06 compute-0 nova_compute[250764]: </cpu>
Nov 29 06:43:06 compute-0 nova_compute[250764]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.437 250780 DEBUG nova.virt.libvirt.driver [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.471 250780 INFO nova.virt.node [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Determined node identity 36ed0248-8d04-4532-95bb-daab89f12202 from /var/lib/nova/compute_id
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.565 250780 WARNING nova.compute.manager [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Compute nodes ['36ed0248-8d04-4532-95bb-daab89f12202'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.606 250780 INFO nova.compute.manager [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 29 06:43:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:06 compute-0 sudo[251740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mohhyakbisaxcgblttzkgzuacvefsgyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398586.4161332-4362-223726827736218/AnsiballZ_systemd.py'
Nov 29 06:43:06 compute-0 sudo[251740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.765 250780 WARNING nova.compute.manager [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.765 250780 DEBUG oslo_concurrency.lockutils [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.766 250780 DEBUG oslo_concurrency.lockutils [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.766 250780 DEBUG oslo_concurrency.lockutils [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.766 250780 DEBUG nova.compute.resource_tracker [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 06:43:06 compute-0 nova_compute[250764]: 2025-11-29 06:43:06.766 250780 DEBUG oslo_concurrency.processutils [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:43:06 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/3410839554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:43:06 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3797343968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:43:07 compute-0 python3.9[251742]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 06:43:07 compute-0 systemd[1]: Stopping nova_compute container...
Nov 29 06:43:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:43:07 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2326258606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:43:07 compute-0 nova_compute[250764]: 2025-11-29 06:43:07.200 250780 DEBUG oslo_concurrency.processutils [None req-0030f89b-8686-48a6-a1ec-9c3ff8f4b6a3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:43:07 compute-0 nova_compute[250764]: 2025-11-29 06:43:07.214 250780 DEBUG oslo_concurrency.lockutils [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 06:43:07 compute-0 nova_compute[250764]: 2025-11-29 06:43:07.215 250780 DEBUG oslo_concurrency.lockutils [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 06:43:07 compute-0 nova_compute[250764]: 2025-11-29 06:43:07.215 250780 DEBUG oslo_concurrency.lockutils [None req-73b18892-7e23-4b9d-83bd-342ce28c38c5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 06:43:07 compute-0 sshd-session[251563]: Invalid user mike from 103.31.39.143 port 60734
Nov 29 06:43:07 compute-0 podman[251784]: 2025-11-29 06:43:07.592352785 +0000 UTC m=+0.064373551 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 06:43:07 compute-0 virtqemud[251417]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 29 06:43:07 compute-0 virtqemud[251417]: hostname: compute-0
Nov 29 06:43:07 compute-0 virtqemud[251417]: End of file while reading data: Input/output error
Nov 29 06:43:07 compute-0 systemd[1]: libpod-e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4.scope: Deactivated successfully.
Nov 29 06:43:07 compute-0 systemd[1]: libpod-e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4.scope: Consumed 4.017s CPU time.
Nov 29 06:43:07 compute-0 podman[251767]: 2025-11-29 06:43:07.60847228 +0000 UTC m=+0.452775015 container died e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 06:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4-userdata-shm.mount: Deactivated successfully.
Nov 29 06:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb-merged.mount: Deactivated successfully.
Nov 29 06:43:07 compute-0 podman[251785]: 2025-11-29 06:43:07.657023112 +0000 UTC m=+0.126599728 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 06:43:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:43:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:07.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:43:07 compute-0 sshd-session[251563]: Received disconnect from 103.31.39.143 port 60734:11: Bye Bye [preauth]
Nov 29 06:43:07 compute-0 sshd-session[251563]: Disconnected from invalid user mike 103.31.39.143 port 60734 [preauth]
Nov 29 06:43:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:08.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:09 compute-0 sshd-session[251843]: Invalid user support from 49.247.35.31 port 23400
Nov 29 06:43:09 compute-0 sshd-session[251846]: Invalid user hadoop from 103.143.238.173 port 52708
Nov 29 06:43:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:09.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:09 compute-0 sshd-session[251846]: Received disconnect from 103.143.238.173 port 52708:11: Bye Bye [preauth]
Nov 29 06:43:09 compute-0 sshd-session[251846]: Disconnected from invalid user hadoop 103.143.238.173 port 52708 [preauth]
Nov 29 06:43:09 compute-0 sshd-session[251843]: Received disconnect from 49.247.35.31 port 23400:11: Bye Bye [preauth]
Nov 29 06:43:09 compute-0 sshd-session[251843]: Disconnected from invalid user support 49.247.35.31 port 23400 [preauth]
Nov 29 06:43:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:10.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:43:10 compute-0 podman[251767]: 2025-11-29 06:43:10.337968808 +0000 UTC m=+3.182271533 container cleanup e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 06:43:10 compute-0 podman[251767]: nova_compute
Nov 29 06:43:10 compute-0 podman[251848]: nova_compute
Nov 29 06:43:10 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 29 06:43:10 compute-0 systemd[1]: Stopped nova_compute container.
Nov 29 06:43:10 compute-0 systemd[1]: Starting nova_compute container...
Nov 29 06:43:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:43:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d797c2b4e3996a56f9f8a6e9a63d3adde8833aeb2ad9cc0fab53d65e4c7eafbb/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:10 compute-0 podman[251861]: 2025-11-29 06:43:10.633233048 +0000 UTC m=+0.187577193 container init e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:43:10 compute-0 podman[251861]: 2025-11-29 06:43:10.646142562 +0000 UTC m=+0.200486657 container start e2ad515a2dbc402235ed00e4020353b5a12eaf8adb18cd2c92ca85ab5e8c64a4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 06:43:10 compute-0 podman[251861]: nova_compute
Nov 29 06:43:10 compute-0 nova_compute[251877]: + sudo -E kolla_set_configs
Nov 29 06:43:10 compute-0 systemd[1]: Started nova_compute container.
Nov 29 06:43:10 compute-0 sudo[251740]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Validating config file
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Copying service configuration files
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Deleting /etc/ceph
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Creating directory /etc/ceph
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Writing out command to execute
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 06:43:10 compute-0 nova_compute[251877]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 06:43:10 compute-0 nova_compute[251877]: ++ cat /run_command
Nov 29 06:43:10 compute-0 nova_compute[251877]: + CMD=nova-compute
Nov 29 06:43:10 compute-0 nova_compute[251877]: + ARGS=
Nov 29 06:43:10 compute-0 nova_compute[251877]: + sudo kolla_copy_cacerts
Nov 29 06:43:10 compute-0 nova_compute[251877]: + [[ ! -n '' ]]
Nov 29 06:43:10 compute-0 nova_compute[251877]: + . kolla_extend_start
Nov 29 06:43:10 compute-0 nova_compute[251877]: Running command: 'nova-compute'
Nov 29 06:43:10 compute-0 nova_compute[251877]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 06:43:10 compute-0 nova_compute[251877]: + umask 0022
Nov 29 06:43:10 compute-0 nova_compute[251877]: + exec nova-compute
Nov 29 06:43:11 compute-0 ceph-mon[74654]: pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:11 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2326258606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:43:11 compute-0 sudo[251916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:43:11 compute-0 sudo[251916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:43:11 compute-0 sudo[251916]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:11 compute-0 sshd-session[251914]: Received disconnect from 162.214.92.14 port 52260:11: Bye Bye [preauth]
Nov 29 06:43:11 compute-0 sshd-session[251914]: Disconnected from authenticating user root 162.214.92.14 port 52260 [preauth]
Nov 29 06:43:11 compute-0 sudo[251942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:43:11 compute-0 sudo[251942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:43:11 compute-0 sudo[251942]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:11.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:12 compute-0 sudo[252092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbqrxlutdcjcxfginwspgfzpqqabjrwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764398591.91471-4389-236239775103475/AnsiballZ_podman_container.py'
Nov 29 06:43:12 compute-0 sudo[252092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:43:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:12.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:12 compute-0 python3.9[252094]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 06:43:12 compute-0 systemd[1]: Started libpod-conmon-ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b.scope.
Nov 29 06:43:12 compute-0 ceph-mon[74654]: pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:12 compute-0 ceph-mon[74654]: pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cc7b3cfdcdf9a6442ddab84d404c5723027fe9c95bf1e8860f3d26cf96a0c6/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cc7b3cfdcdf9a6442ddab84d404c5723027fe9c95bf1e8860f3d26cf96a0c6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cc7b3cfdcdf9a6442ddab84d404c5723027fe9c95bf1e8860f3d26cf96a0c6/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 29 06:43:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:12 compute-0 nova_compute[251877]: 2025-11-29 06:43:12.759 251881 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 06:43:12 compute-0 nova_compute[251877]: 2025-11-29 06:43:12.759 251881 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 06:43:12 compute-0 nova_compute[251877]: 2025-11-29 06:43:12.759 251881 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 06:43:12 compute-0 nova_compute[251877]: 2025-11-29 06:43:12.760 251881 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 29 06:43:12 compute-0 podman[252119]: 2025-11-29 06:43:12.829316885 +0000 UTC m=+0.228603661 container init ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute_init, tcib_managed=true)
Nov 29 06:43:12 compute-0 podman[252119]: 2025-11-29 06:43:12.850460001 +0000 UTC m=+0.249746797 container start ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible)
Nov 29 06:43:12 compute-0 nova_compute[251877]: 2025-11-29 06:43:12.917 251881 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:43:12 compute-0 nova_compute_init[252143]: INFO:nova_statedir:Applying nova statedir ownership
Nov 29 06:43:12 compute-0 nova_compute_init[252143]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 29 06:43:12 compute-0 nova_compute_init[252143]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 29 06:43:12 compute-0 nova_compute_init[252143]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 29 06:43:12 compute-0 nova_compute_init[252143]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 29 06:43:12 compute-0 nova_compute_init[252143]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 29 06:43:12 compute-0 nova_compute_init[252143]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 29 06:43:12 compute-0 nova_compute_init[252143]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 29 06:43:12 compute-0 nova_compute_init[252143]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 29 06:43:12 compute-0 nova_compute_init[252143]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 29 06:43:12 compute-0 nova_compute_init[252143]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 29 06:43:12 compute-0 nova_compute_init[252143]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 29 06:43:12 compute-0 nova_compute_init[252143]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 29 06:43:12 compute-0 nova_compute_init[252143]: INFO:nova_statedir:Nova statedir ownership complete
Nov 29 06:43:12 compute-0 python3.9[252094]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 29 06:43:12 compute-0 systemd[1]: libpod-ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b.scope: Deactivated successfully.
Nov 29 06:43:12 compute-0 nova_compute[251877]: 2025-11-29 06:43:12.947 251881 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:43:12 compute-0 nova_compute[251877]: 2025-11-29 06:43:12.948 251881 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 29 06:43:13 compute-0 podman[252145]: 2025-11-29 06:43:12.999481426 +0000 UTC m=+0.044398384 container died ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0)
Nov 29 06:43:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b-userdata-shm.mount: Deactivated successfully.
Nov 29 06:43:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-79cc7b3cfdcdf9a6442ddab84d404c5723027fe9c95bf1e8860f3d26cf96a0c6-merged.mount: Deactivated successfully.
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:43:13 compute-0 podman[252147]: 2025-11-29 06:43:13.052766549 +0000 UTC m=+0.093439077 container cleanup ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:43:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:43:13 compute-0 systemd[1]: libpod-conmon-ab476ee339f2a8c5fbac787c0045404c7acedcfbdff6a82cef58a23ba6e42f8b.scope: Deactivated successfully.
Nov 29 06:43:13 compute-0 sudo[252092]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:13.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:14 compute-0 ceph-mon[74654]: pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:43:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:14.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:43:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:15 compute-0 ceph-mon[74654]: pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:43:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:15.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:16 compute-0 sshd-session[219505]: Connection closed by 192.168.122.30 port 56050
Nov 29 06:43:16 compute-0 sshd-session[219499]: pam_unix(sshd:session): session closed for user zuul
Nov 29 06:43:16 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 29 06:43:16 compute-0 systemd[1]: session-50.scope: Consumed 2min 30.922s CPU time.
Nov 29 06:43:16 compute-0 systemd-logind[797]: Session 50 logged out. Waiting for processes to exit.
Nov 29 06:43:16 compute-0 systemd-logind[797]: Removed session 50.
Nov 29 06:43:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:16.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:43:17.231 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:43:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:43:17.232 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:43:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:43:17.233 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:43:17 compute-0 nova_compute[251877]: 2025-11-29 06:43:17.295 251881 INFO nova.virt.driver [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 29 06:43:17 compute-0 nova_compute[251877]: 2025-11-29 06:43:17.414 251881 INFO nova.compute.provider_config [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 29 06:43:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:17.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:17 compute-0 ceph-mon[74654]: pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:18.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.951 251881 DEBUG oslo_concurrency.lockutils [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.951 251881 DEBUG oslo_concurrency.lockutils [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.952 251881 DEBUG oslo_concurrency.lockutils [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.953 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.953 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.953 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.953 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.954 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.954 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.954 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.955 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.955 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.955 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.956 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.956 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.956 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.957 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.957 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.957 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.958 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.958 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.958 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.959 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.959 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.959 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.960 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.960 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.961 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.961 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.962 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.962 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.962 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.963 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.963 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.963 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.964 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.964 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.964 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.965 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.965 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.966 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.966 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.967 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.967 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.967 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.968 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.968 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.968 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.969 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.969 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.969 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.970 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.970 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.971 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.971 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.972 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.972 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.972 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.973 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.973 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.973 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.973 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.974 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.974 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.974 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.975 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.975 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.975 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.976 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.976 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.976 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.976 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.977 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.977 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.977 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.978 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.978 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.978 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.979 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.979 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.979 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.980 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.980 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.980 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.981 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.981 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.981 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.981 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.982 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.982 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.982 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.983 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.983 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.983 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.984 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.984 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.984 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.984 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.985 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.985 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.985 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.986 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.986 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.986 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.987 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.987 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.987 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.987 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.988 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.988 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.988 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.989 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.989 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.989 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.989 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.990 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.990 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.991 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.991 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.991 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.991 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.992 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.992 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.993 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.993 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.993 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.994 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.994 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.995 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.995 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.995 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.996 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.996 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.997 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.997 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.997 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.998 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.998 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.999 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:18 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.999 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:18.999 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.000 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.000 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.001 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.001 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.001 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.001 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.002 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.002 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.002 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.002 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.003 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.003 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.003 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.004 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.004 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.004 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.004 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.005 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.005 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.005 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.005 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.006 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.006 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.006 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.007 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.007 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.007 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.008 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.008 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.008 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.008 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.009 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.009 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.009 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.009 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.010 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.010 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.010 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.011 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.011 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.011 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.011 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.012 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.012 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.012 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.012 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.013 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.013 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.013 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.014 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.014 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.014 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.014 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.014 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.015 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.015 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.015 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.015 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.015 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.016 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.016 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.016 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.016 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.016 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.017 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.017 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.017 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.017 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.017 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.018 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.018 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.018 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.018 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.018 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.019 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.019 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.019 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.019 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.019 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.020 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.020 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.020 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.020 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.020 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.021 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.021 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.021 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.021 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.021 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.022 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.022 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.022 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.022 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.023 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.023 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.023 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.023 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.023 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.024 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.024 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.024 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.024 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.025 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.025 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.025 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.025 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.026 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.026 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.026 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.026 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.026 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.027 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.027 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.027 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.027 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.027 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.028 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.028 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.028 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.028 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.028 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.028 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.029 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.029 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.029 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.029 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.030 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.030 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.030 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.030 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.030 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.031 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.031 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.031 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.031 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.031 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.032 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.032 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.032 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.032 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.032 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.033 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.033 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.033 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.033 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.033 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.034 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.034 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.034 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.034 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.034 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.035 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.035 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.035 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.035 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.035 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.036 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.036 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.036 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.036 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.037 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.037 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.037 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.037 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.037 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.037 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.037 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.038 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.038 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.038 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.038 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.038 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.038 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.038 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.039 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.039 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.039 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.039 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.039 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.039 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.039 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.040 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.040 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.040 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.040 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.040 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.040 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.040 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.041 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.041 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.041 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.041 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.041 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.041 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.041 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.042 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.042 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.042 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.042 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.042 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.042 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.042 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.043 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.043 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.043 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.043 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.043 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.043 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.043 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.044 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.044 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.044 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.044 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.044 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.044 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.044 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.045 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.045 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.045 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.045 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.045 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.046 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.046 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.046 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.046 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.046 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.046 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.046 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.047 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.048 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.048 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.048 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.048 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.048 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.048 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.048 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.049 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.049 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.049 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.049 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.049 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.049 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.050 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.050 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.050 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.050 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.050 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.050 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.050 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.051 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.051 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.051 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.051 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.051 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.051 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.051 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.052 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.052 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.052 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.052 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.052 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.052 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.052 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.053 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.053 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.053 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.053 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.053 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.053 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.053 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.054 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.054 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.054 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.054 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.054 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.054 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.054 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.055 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.055 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.055 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.055 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.055 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.055 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.055 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.056 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.056 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.056 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.056 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.056 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.056 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.056 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.057 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.057 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.057 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.057 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.057 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.057 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.057 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.058 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.058 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.058 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.058 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.058 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.058 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.058 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.059 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.059 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.059 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.059 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.059 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.059 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.059 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.060 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.060 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.060 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.060 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.060 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.060 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.060 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.061 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.061 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.061 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.061 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.061 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.061 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.061 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.062 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.062 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.062 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.062 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.062 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.062 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.063 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.064 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.064 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.064 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.064 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.064 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.064 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.065 251881 WARNING oslo_config.cfg [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 06:43:19 compute-0 nova_compute[251877]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 06:43:19 compute-0 nova_compute[251877]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 06:43:19 compute-0 nova_compute[251877]: and ``live_migration_inbound_addr`` respectively.
Nov 29 06:43:19 compute-0 nova_compute[251877]: ).  Its value may be silently ignored in the future.
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.065 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.065 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.065 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.065 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.065 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.066 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.066 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.066 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.066 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.066 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.066 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.067 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.067 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.067 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.067 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.067 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.067 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.067 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.068 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rbd_secret_uuid        = 336ec58c-893b-528f-a0c1-6ed1196bc047 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.068 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.068 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.068 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.068 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.068 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.068 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.069 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.069 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.069 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.069 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.069 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.069 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.070 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.070 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.070 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.070 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.070 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.070 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.070 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.071 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.071 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.071 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.071 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.071 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.071 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.071 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.072 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.072 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.072 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.072 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.072 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.072 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.072 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.073 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.073 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.073 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.073 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.073 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.073 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.073 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.074 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.074 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.074 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.074 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.074 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.074 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.074 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.075 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.075 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.075 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.075 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.075 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.075 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.075 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.076 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.076 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.076 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.076 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.076 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.076 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.076 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.077 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.077 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.077 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.077 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.077 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.077 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.078 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.078 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.078 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.078 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.078 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.078 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.078 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.079 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.080 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.080 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.080 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.080 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.080 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.080 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.080 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.081 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.081 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.081 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.081 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.081 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.081 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.081 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.082 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.082 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.082 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.082 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.082 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.082 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.082 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.083 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.083 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.083 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.083 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.083 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.083 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.083 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.084 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.084 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.084 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.084 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.084 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.084 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.085 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.085 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.085 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.085 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.085 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.085 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.086 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.086 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.086 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.086 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.086 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.086 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.087 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.087 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.087 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.087 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.087 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.087 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.088 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.088 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.088 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.088 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.088 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.088 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.088 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.089 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.089 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.089 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.089 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.089 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.089 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.089 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.090 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.090 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.090 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.090 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.090 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.090 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.091 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.091 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.091 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.091 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.091 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.091 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.092 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.092 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.092 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.092 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.092 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.092 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.093 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.094 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.094 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.094 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.094 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.094 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.094 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.095 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.095 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.095 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.095 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.095 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.095 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.095 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.096 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.096 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.096 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.096 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.096 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.096 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.096 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.097 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.097 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.097 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.097 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.097 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.097 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.098 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.098 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.098 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.098 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.098 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.098 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.099 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.099 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.099 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.099 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.099 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.099 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.099 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.100 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.101 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.101 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.101 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.101 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.101 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.101 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.101 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.102 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.102 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.102 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.102 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.102 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.103 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.103 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.103 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.103 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.103 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.104 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.104 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.104 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.104 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.104 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.105 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.105 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.105 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.105 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.105 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.106 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.106 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.106 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.106 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.106 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.106 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.107 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.107 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.107 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.107 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.107 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.108 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.108 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.108 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.108 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.109 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.109 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.109 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.109 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.109 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.110 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.110 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.110 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.110 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.110 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.111 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.111 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.111 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.111 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.111 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.112 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.112 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.112 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.112 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.112 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.113 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.113 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.113 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.113 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.113 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.114 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.114 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.114 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.114 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.114 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.114 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.115 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.115 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.115 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.115 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.115 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.116 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.116 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.116 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.116 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.116 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.117 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.117 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.117 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.117 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.117 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.117 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.118 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.118 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.118 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.118 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.118 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.119 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.119 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.119 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.119 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.119 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.120 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.120 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.120 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.120 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.120 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.121 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.121 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.121 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.121 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.121 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.121 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.122 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.122 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.122 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.122 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.122 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.123 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.123 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.123 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.123 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.123 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.124 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.124 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.124 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.124 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.124 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.125 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.125 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.125 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.125 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.125 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.125 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.126 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.126 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.126 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.126 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.126 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.127 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.127 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.127 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.127 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.127 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.127 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.128 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.128 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.128 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.128 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.128 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.129 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.129 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.129 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.129 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.129 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.130 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.130 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.130 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.130 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.130 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.130 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.131 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.131 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.131 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.131 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.132 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.132 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.132 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.132 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.132 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.132 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.133 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.133 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.133 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.133 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.133 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.133 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.133 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.134 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.135 251881 DEBUG oslo_service.service [None req-916d8ad5-9df4-44c3-975c-697348153336 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.136 251881 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 29 06:43:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:19.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:19 compute-0 ceph-mon[74654]: pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.930 251881 INFO nova.virt.node [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Determined node identity 36ed0248-8d04-4532-95bb-daab89f12202 from /var/lib/nova/compute_id
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.931 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.931 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.931 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.932 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.944 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4540490f10> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.947 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4540490f10> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.951 251881 INFO nova.virt.libvirt.driver [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Connection event '1' reason 'None'
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.960 251881 INFO nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 06:43:19 compute-0 nova_compute[251877]: 
Nov 29 06:43:19 compute-0 nova_compute[251877]:   <host>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <uuid>c87c7517-e569-4e42-8023-b11f25bc4e0c</uuid>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <cpu>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <arch>x86_64</arch>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model>EPYC-Rome-v4</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <vendor>AMD</vendor>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <microcode version='16777317'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <signature family='23' model='49' stepping='0'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='x2apic'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='tsc-deadline'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='osxsave'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='hypervisor'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='tsc_adjust'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='spec-ctrl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='stibp'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='arch-capabilities'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='ssbd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='cmp_legacy'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='topoext'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='virt-ssbd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='lbrv'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='tsc-scale'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='vmcb-clean'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='pause-filter'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='pfthreshold'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='svme-addr-chk'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='rdctl-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='skip-l1dfl-vmentry'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='mds-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature name='pschange-mc-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <pages unit='KiB' size='4'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <pages unit='KiB' size='2048'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <pages unit='KiB' size='1048576'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </cpu>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <power_management>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <suspend_mem/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </power_management>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <iommu support='no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <migration_features>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <live/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <uri_transports>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <uri_transport>tcp</uri_transport>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <uri_transport>rdma</uri_transport>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </uri_transports>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </migration_features>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <topology>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <cells num='1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <cell id='0'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:           <memory unit='KiB'>7864324</memory>
Nov 29 06:43:19 compute-0 nova_compute[251877]:           <pages unit='KiB' size='4'>1966081</pages>
Nov 29 06:43:19 compute-0 nova_compute[251877]:           <pages unit='KiB' size='2048'>0</pages>
Nov 29 06:43:19 compute-0 nova_compute[251877]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 29 06:43:19 compute-0 nova_compute[251877]:           <distances>
Nov 29 06:43:19 compute-0 nova_compute[251877]:             <sibling id='0' value='10'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:           </distances>
Nov 29 06:43:19 compute-0 nova_compute[251877]:           <cpus num='8'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:           </cpus>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         </cell>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </cells>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </topology>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <cache>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </cache>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <secmodel>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model>selinux</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <doi>0</doi>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </secmodel>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <secmodel>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model>dac</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <doi>0</doi>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </secmodel>
Nov 29 06:43:19 compute-0 nova_compute[251877]:   </host>
Nov 29 06:43:19 compute-0 nova_compute[251877]: 
Nov 29 06:43:19 compute-0 nova_compute[251877]:   <guest>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <os_type>hvm</os_type>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <arch name='i686'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <wordsize>32</wordsize>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <domain type='qemu'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <domain type='kvm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </arch>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <features>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <pae/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <nonpae/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <acpi default='on' toggle='yes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <apic default='on' toggle='no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <cpuselection/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <deviceboot/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <disksnapshot default='on' toggle='no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <externalSnapshot/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </features>
Nov 29 06:43:19 compute-0 nova_compute[251877]:   </guest>
Nov 29 06:43:19 compute-0 nova_compute[251877]: 
Nov 29 06:43:19 compute-0 nova_compute[251877]:   <guest>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <os_type>hvm</os_type>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <arch name='x86_64'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <wordsize>64</wordsize>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <domain type='qemu'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <domain type='kvm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </arch>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <features>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <acpi default='on' toggle='yes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <apic default='on' toggle='no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <cpuselection/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <deviceboot/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <disksnapshot default='on' toggle='no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <externalSnapshot/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </features>
Nov 29 06:43:19 compute-0 nova_compute[251877]:   </guest>
Nov 29 06:43:19 compute-0 nova_compute[251877]: 
Nov 29 06:43:19 compute-0 nova_compute[251877]: </capabilities>
Nov 29 06:43:19 compute-0 nova_compute[251877]: 
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.967 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 06:43:19 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.971 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 06:43:19 compute-0 nova_compute[251877]: <domainCapabilities>
Nov 29 06:43:19 compute-0 nova_compute[251877]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 06:43:19 compute-0 nova_compute[251877]:   <domain>kvm</domain>
Nov 29 06:43:19 compute-0 nova_compute[251877]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 06:43:19 compute-0 nova_compute[251877]:   <arch>i686</arch>
Nov 29 06:43:19 compute-0 nova_compute[251877]:   <vcpu max='4096'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:   <iothreads supported='yes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:   <os supported='yes'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <enum name='firmware'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <loader supported='yes'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <value>rom</value>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <value>pflash</value>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <enum name='readonly'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <value>yes</value>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <value>no</value>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <enum name='secure'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <value>no</value>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </loader>
Nov 29 06:43:19 compute-0 nova_compute[251877]:   </os>
Nov 29 06:43:19 compute-0 nova_compute[251877]:   <cpu>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <mode name='host-passthrough' supported='yes'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <enum name='hostPassthroughMigratable'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <value>on</value>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <value>off</value>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <mode name='maximum' supported='yes'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <enum name='maximumMigratable'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <value>on</value>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <value>off</value>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <mode name='host-model' supported='yes'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <vendor>AMD</vendor>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='x2apic'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='hypervisor'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='stibp'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='ssbd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='overflow-recov'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='succor'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='ibrs'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='lbrv'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='tsc-scale'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='flushbyasid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='pause-filter'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='pfthreshold'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <feature policy='disable' name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:19 compute-0 nova_compute[251877]:     <mode name='custom' supported='yes'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Broadwell'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Broadwell-IBRS'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Broadwell-noTSX'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v2'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v3'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v4'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Cooperlake'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Cooperlake-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Cooperlake-v2'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Denverton'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Denverton-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Denverton-v2'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Denverton-v3'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Dhyana-v2'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='EPYC-Genoa'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amd-psfd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='auto-ibrs'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='stibp-always-on'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amd-psfd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='auto-ibrs'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='stibp-always-on'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='EPYC-Milan'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='EPYC-Milan-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='EPYC-Milan-v2'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amd-psfd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='stibp-always-on'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome-v2'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome-v3'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='EPYC-v3'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='EPYC-v4'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='GraniteRapids'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-fp16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='prefetchiti'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='GraniteRapids-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-fp16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='prefetchiti'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='GraniteRapids-v2'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-fp16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx10'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx10-128'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx10-256'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx10-512'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='prefetchiti'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Haswell'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Haswell-IBRS'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Haswell-noTSX'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Haswell-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Haswell-v2'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Haswell-v3'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Haswell-v4'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v2'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v3'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v4'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v5'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v6'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v7'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='IvyBridge'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='IvyBridge-IBRS'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='IvyBridge-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='IvyBridge-v2'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='KnightsMill'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512er'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512pf'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='KnightsMill-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512er'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512pf'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Opteron_G4'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Opteron_G4-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Opteron_G5'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='tbm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Opteron_G5-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='tbm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids-v2'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids-v3'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='SierraForest'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='cmpccxadd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='SierraForest-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-ifma'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='cmpccxadd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v1'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v2'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v3'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v4'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:19 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v5'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='athlon'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='athlon-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='core2duo'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='core2duo-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='coreduo'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='coreduo-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='n270'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='n270-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='phenom'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='phenom-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </cpu>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <memoryBacking supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <enum name='sourceType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>file</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>anonymous</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>memfd</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </memoryBacking>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <devices>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <disk supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='diskDevice'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>disk</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>cdrom</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>floppy</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>lun</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='bus'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>fdc</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>scsi</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>usb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>sata</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-non-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </disk>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <graphics supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vnc</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>egl-headless</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>dbus</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </graphics>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <video supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='modelType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vga</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>cirrus</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>none</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>bochs</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>ramfb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </video>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <hostdev supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='mode'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>subsystem</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='startupPolicy'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>default</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>mandatory</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>requisite</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>optional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='subsysType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>usb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pci</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>scsi</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='capsType'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='pciBackend'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </hostdev>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <rng supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-non-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendModel'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>random</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>egd</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>builtin</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </rng>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <filesystem supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='driverType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>path</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>handle</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtiofs</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </filesystem>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <tpm supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tpm-tis</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tpm-crb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendModel'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>emulator</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>external</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendVersion'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>2.0</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </tpm>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <redirdev supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='bus'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>usb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </redirdev>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <channel supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pty</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>unix</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </channel>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <crypto supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>qemu</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendModel'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>builtin</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </crypto>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <interface supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>default</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>passt</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </interface>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <panic supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>isa</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>hyperv</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </panic>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <console supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>null</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vc</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pty</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>dev</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>file</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pipe</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>stdio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>udp</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tcp</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>unix</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>qemu-vdagent</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>dbus</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </console>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </devices>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <features>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <gic supported='no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <vmcoreinfo supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <genid supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <backingStoreInput supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <backup supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <async-teardown supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <ps2 supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <sev supported='no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <sgx supported='no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <hyperv supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='features'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>relaxed</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vapic</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>spinlocks</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vpindex</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>runtime</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>synic</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>stimer</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>reset</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vendor_id</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>frequencies</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>reenlightenment</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tlbflush</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>ipi</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>avic</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>emsr_bitmap</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>xmm_input</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <defaults>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <spinlocks>4095</spinlocks>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <stimer_direct>on</stimer_direct>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </defaults>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </hyperv>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <launchSecurity supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='sectype'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tdx</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </launchSecurity>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </features>
Nov 29 06:43:20 compute-0 nova_compute[251877]: </domainCapabilities>
Nov 29 06:43:20 compute-0 nova_compute[251877]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 06:43:20 compute-0 nova_compute[251877]: 2025-11-29 06:43:19.977 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 06:43:20 compute-0 nova_compute[251877]: <domainCapabilities>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <domain>kvm</domain>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <arch>i686</arch>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <vcpu max='240'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <iothreads supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <os supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <enum name='firmware'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <loader supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>rom</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pflash</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='readonly'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>yes</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>no</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='secure'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>no</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </loader>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </os>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <cpu>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <mode name='host-passthrough' supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='hostPassthroughMigratable'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>on</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>off</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <mode name='maximum' supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='maximumMigratable'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>on</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>off</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <mode name='host-model' supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <vendor>AMD</vendor>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='x2apic'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='hypervisor'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='stibp'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='ssbd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='overflow-recov'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='succor'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='ibrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='lbrv'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='tsc-scale'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='flushbyasid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='pause-filter'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='pfthreshold'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='disable' name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <mode name='custom' supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-noTSX'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cooperlake'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cooperlake-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cooperlake-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Denverton'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Denverton-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Denverton-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Denverton-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Dhyana-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Genoa'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amd-psfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='auto-ibrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='stibp-always-on'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amd-psfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='auto-ibrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='stibp-always-on'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Milan'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Milan-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Milan-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amd-psfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='stibp-always-on'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='GraniteRapids'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='prefetchiti'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='GraniteRapids-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='prefetchiti'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='GraniteRapids-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx10'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx10-128'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx10-256'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx10-512'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='prefetchiti'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-noTSX'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v5'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v6'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v7'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='IvyBridge'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='IvyBridge-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='IvyBridge-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='IvyBridge-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='KnightsMill'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512er'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512pf'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='KnightsMill-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512er'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512pf'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Opteron_G4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Opteron_G4-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Opteron_G5'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tbm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Opteron_G5-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tbm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SierraForest'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cmpccxadd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SierraForest-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cmpccxadd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v5'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='athlon'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='athlon-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='core2duo'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='core2duo-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='coreduo'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='coreduo-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='n270'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='n270-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='phenom'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='phenom-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </cpu>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <memoryBacking supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <enum name='sourceType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>file</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>anonymous</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>memfd</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </memoryBacking>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <devices>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <disk supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='diskDevice'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>disk</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>cdrom</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>floppy</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>lun</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='bus'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>ide</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>fdc</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>scsi</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>usb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>sata</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-non-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </disk>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <graphics supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vnc</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>egl-headless</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>dbus</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </graphics>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <video supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='modelType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vga</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>cirrus</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>none</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>bochs</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>ramfb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </video>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <hostdev supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='mode'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>subsystem</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='startupPolicy'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>default</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>mandatory</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>requisite</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>optional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='subsysType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>usb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pci</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>scsi</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='capsType'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='pciBackend'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </hostdev>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <rng supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-non-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendModel'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>random</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>egd</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>builtin</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </rng>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <filesystem supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='driverType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>path</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>handle</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtiofs</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </filesystem>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <tpm supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tpm-tis</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tpm-crb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendModel'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>emulator</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>external</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendVersion'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>2.0</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </tpm>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <redirdev supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='bus'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>usb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </redirdev>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <channel supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pty</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>unix</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </channel>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <crypto supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>qemu</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendModel'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>builtin</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </crypto>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <interface supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>default</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>passt</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </interface>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <panic supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>isa</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>hyperv</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </panic>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <console supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>null</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vc</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pty</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>dev</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>file</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pipe</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>stdio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>udp</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tcp</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>unix</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>qemu-vdagent</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>dbus</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </console>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </devices>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <features>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <gic supported='no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <vmcoreinfo supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <genid supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <backingStoreInput supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <backup supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <async-teardown supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <ps2 supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <sev supported='no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <sgx supported='no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <hyperv supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='features'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>relaxed</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vapic</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>spinlocks</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vpindex</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>runtime</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>synic</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>stimer</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>reset</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vendor_id</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>frequencies</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>reenlightenment</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tlbflush</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>ipi</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>avic</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>emsr_bitmap</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>xmm_input</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <defaults>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <spinlocks>4095</spinlocks>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <stimer_direct>on</stimer_direct>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </defaults>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </hyperv>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <launchSecurity supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='sectype'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tdx</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </launchSecurity>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </features>
Nov 29 06:43:20 compute-0 nova_compute[251877]: </domainCapabilities>
Nov 29 06:43:20 compute-0 nova_compute[251877]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 06:43:20 compute-0 nova_compute[251877]: 2025-11-29 06:43:20.005 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 06:43:20 compute-0 nova_compute[251877]: 2025-11-29 06:43:20.010 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 06:43:20 compute-0 nova_compute[251877]: <domainCapabilities>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <domain>kvm</domain>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <arch>x86_64</arch>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <vcpu max='4096'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <iothreads supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <os supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <enum name='firmware'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>efi</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <loader supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>rom</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pflash</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='readonly'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>yes</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>no</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='secure'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>yes</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>no</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </loader>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </os>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <cpu>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <mode name='host-passthrough' supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='hostPassthroughMigratable'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>on</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>off</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <mode name='maximum' supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='maximumMigratable'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>on</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>off</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <mode name='host-model' supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <vendor>AMD</vendor>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='x2apic'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='hypervisor'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='stibp'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='ssbd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='overflow-recov'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='succor'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='ibrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='lbrv'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='tsc-scale'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='flushbyasid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='pause-filter'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='pfthreshold'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='disable' name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <mode name='custom' supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-noTSX'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cooperlake'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cooperlake-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cooperlake-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Denverton'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Denverton-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Denverton-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Denverton-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Dhyana-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Genoa'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amd-psfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='auto-ibrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='stibp-always-on'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amd-psfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='auto-ibrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='stibp-always-on'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Milan'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Milan-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Milan-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amd-psfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='stibp-always-on'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='GraniteRapids'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='prefetchiti'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='GraniteRapids-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='prefetchiti'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='GraniteRapids-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx10'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx10-128'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx10-256'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx10-512'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='prefetchiti'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-noTSX'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v5'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v6'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v7'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='IvyBridge'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='IvyBridge-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='IvyBridge-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='IvyBridge-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='KnightsMill'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512er'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512pf'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='KnightsMill-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512er'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512pf'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Opteron_G4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Opteron_G4-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Opteron_G5'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tbm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Opteron_G5-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tbm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SierraForest'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cmpccxadd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SierraForest-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cmpccxadd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v5'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='athlon'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='athlon-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='core2duo'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='core2duo-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='coreduo'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='coreduo-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='n270'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='n270-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='phenom'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='phenom-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </cpu>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <memoryBacking supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <enum name='sourceType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>file</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>anonymous</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>memfd</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </memoryBacking>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <devices>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <disk supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='diskDevice'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>disk</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>cdrom</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>floppy</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>lun</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='bus'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>fdc</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>scsi</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>usb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>sata</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-non-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </disk>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <graphics supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vnc</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>egl-headless</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>dbus</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </graphics>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <video supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='modelType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vga</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>cirrus</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>none</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>bochs</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>ramfb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </video>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <hostdev supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='mode'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>subsystem</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='startupPolicy'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>default</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>mandatory</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>requisite</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>optional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='subsysType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>usb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pci</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>scsi</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='capsType'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='pciBackend'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </hostdev>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <rng supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-non-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendModel'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>random</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>egd</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>builtin</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </rng>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <filesystem supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='driverType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>path</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>handle</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtiofs</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </filesystem>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <tpm supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tpm-tis</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tpm-crb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendModel'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>emulator</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>external</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendVersion'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>2.0</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </tpm>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <redirdev supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='bus'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>usb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </redirdev>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <channel supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pty</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>unix</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </channel>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <crypto supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>qemu</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendModel'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>builtin</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </crypto>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <interface supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>default</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>passt</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </interface>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <panic supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>isa</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>hyperv</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </panic>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <console supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>null</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vc</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pty</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>dev</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>file</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pipe</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>stdio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>udp</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tcp</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>unix</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>qemu-vdagent</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>dbus</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </console>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </devices>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <features>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <gic supported='no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <vmcoreinfo supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <genid supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <backingStoreInput supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <backup supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <async-teardown supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <ps2 supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <sev supported='no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <sgx supported='no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <hyperv supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='features'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>relaxed</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vapic</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>spinlocks</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vpindex</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>runtime</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>synic</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>stimer</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>reset</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vendor_id</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>frequencies</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>reenlightenment</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tlbflush</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>ipi</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>avic</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>emsr_bitmap</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>xmm_input</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <defaults>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <spinlocks>4095</spinlocks>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <stimer_direct>on</stimer_direct>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </defaults>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </hyperv>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <launchSecurity supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='sectype'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tdx</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </launchSecurity>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </features>
Nov 29 06:43:20 compute-0 nova_compute[251877]: </domainCapabilities>
Nov 29 06:43:20 compute-0 nova_compute[251877]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 06:43:20 compute-0 nova_compute[251877]: 2025-11-29 06:43:20.076 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 06:43:20 compute-0 nova_compute[251877]: <domainCapabilities>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <domain>kvm</domain>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <arch>x86_64</arch>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <vcpu max='240'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <iothreads supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <os supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <enum name='firmware'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <loader supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>rom</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pflash</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='readonly'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>yes</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>no</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='secure'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>no</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </loader>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </os>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <cpu>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <mode name='host-passthrough' supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='hostPassthroughMigratable'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>on</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>off</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <mode name='maximum' supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='maximumMigratable'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>on</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>off</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <mode name='host-model' supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <vendor>AMD</vendor>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='x2apic'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='hypervisor'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='stibp'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='ssbd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='overflow-recov'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='succor'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='ibrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='lbrv'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='tsc-scale'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='flushbyasid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='pause-filter'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='pfthreshold'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <feature policy='disable' name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <mode name='custom' supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-noTSX'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Broadwell-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cooperlake'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cooperlake-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Cooperlake-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Denverton'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Denverton-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Denverton-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Denverton-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Dhyana-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Genoa'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amd-psfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='auto-ibrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='stibp-always-on'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amd-psfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='auto-ibrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='stibp-always-on'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Milan'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Milan-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Milan-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amd-psfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='no-nested-data-bp'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='null-sel-clr-base'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='stibp-always-on'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-Rome-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='EPYC-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='GraniteRapids'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='prefetchiti'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='GraniteRapids-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='prefetchiti'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='GraniteRapids-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx10'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx10-128'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx10-256'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx10-512'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='prefetchiti'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-noTSX'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Haswell-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v5'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v6'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Icelake-Server-v7'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='IvyBridge'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='IvyBridge-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='IvyBridge-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='IvyBridge-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='KnightsMill'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512er'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512pf'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='KnightsMill-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-4fmaps'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-4vnniw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512er'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512pf'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Opteron_G4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Opteron_G4-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Opteron_G5'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tbm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Opteron_G5-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fma4'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tbm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xop'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SapphireRapids-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='amx-tile'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-bf16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-fp16'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512-vpopcntdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bitalg'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vbmi2'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrc'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fzrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='la57'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='taa-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='tsx-ldtrk'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xfd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SierraForest'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cmpccxadd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='SierraForest-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-ifma'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-ne-convert'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx-vnni-int8'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='bus-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cmpccxadd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fbsdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='fsrs'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ibrs-all'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mcdt-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pbrsb-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='psdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='sbdr-ssdp-no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='serialize'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vaes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='vpclmulqdq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Client-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='hle'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='rtm'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Skylake-Server-v5'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512bw'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512cd'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512dq'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512f'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='avx512vl'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='invpcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pcid'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='pku'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='mpx'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v2'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v3'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='core-capability'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='split-lock-detect'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='Snowridge-v4'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='cldemote'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='erms'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='gfni'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdir64b'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='movdiri'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='xsaves'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='athlon'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='athlon-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='core2duo'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='core2duo-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='coreduo'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='coreduo-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='n270'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='n270-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='ss'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='phenom'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <blockers model='phenom-v1'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnow'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <feature name='3dnowext'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </blockers>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </mode>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </cpu>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <memoryBacking supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <enum name='sourceType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>file</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>anonymous</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <value>memfd</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </memoryBacking>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <devices>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <disk supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='diskDevice'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>disk</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>cdrom</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>floppy</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>lun</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='bus'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>ide</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>fdc</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>scsi</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>usb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>sata</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-non-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </disk>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <graphics supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vnc</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>egl-headless</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>dbus</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </graphics>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <video supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='modelType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vga</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>cirrus</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>none</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>bochs</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>ramfb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </video>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <hostdev supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='mode'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>subsystem</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='startupPolicy'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>default</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>mandatory</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>requisite</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>optional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='subsysType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>usb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pci</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>scsi</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='capsType'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='pciBackend'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </hostdev>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <rng supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtio-non-transitional</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendModel'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>random</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>egd</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>builtin</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </rng>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <filesystem supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='driverType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>path</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>handle</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>virtiofs</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </filesystem>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <tpm supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tpm-tis</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tpm-crb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendModel'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>emulator</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>external</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendVersion'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>2.0</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </tpm>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <redirdev supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='bus'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>usb</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </redirdev>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <channel supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pty</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>unix</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </channel>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <crypto supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>qemu</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendModel'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>builtin</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </crypto>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <interface supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='backendType'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>default</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>passt</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </interface>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <panic supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='model'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>isa</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>hyperv</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </panic>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <console supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='type'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>null</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vc</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pty</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>dev</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>file</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>pipe</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>stdio</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>udp</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tcp</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>unix</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>qemu-vdagent</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>dbus</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </console>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </devices>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <features>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <gic supported='no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <vmcoreinfo supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <genid supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <backingStoreInput supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <backup supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <async-teardown supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <ps2 supported='yes'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <sev supported='no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <sgx supported='no'/>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <hyperv supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='features'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>relaxed</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vapic</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>spinlocks</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vpindex</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>runtime</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>synic</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>stimer</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>reset</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>vendor_id</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>frequencies</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>reenlightenment</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tlbflush</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>ipi</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>avic</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>emsr_bitmap</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>xmm_input</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <defaults>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <spinlocks>4095</spinlocks>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <stimer_direct>on</stimer_direct>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </defaults>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </hyperv>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     <launchSecurity supported='yes'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       <enum name='sectype'>
Nov 29 06:43:20 compute-0 nova_compute[251877]:         <value>tdx</value>
Nov 29 06:43:20 compute-0 nova_compute[251877]:       </enum>
Nov 29 06:43:20 compute-0 nova_compute[251877]:     </launchSecurity>
Nov 29 06:43:20 compute-0 nova_compute[251877]:   </features>
Nov 29 06:43:20 compute-0 nova_compute[251877]: </domainCapabilities>
Nov 29 06:43:20 compute-0 nova_compute[251877]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 06:43:20 compute-0 nova_compute[251877]: 2025-11-29 06:43:20.143 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 06:43:20 compute-0 nova_compute[251877]: 2025-11-29 06:43:20.144 251881 INFO nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Secure Boot support detected
Nov 29 06:43:20 compute-0 nova_compute[251877]: 2025-11-29 06:43:20.146 251881 INFO nova.virt.libvirt.driver [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 06:43:20 compute-0 nova_compute[251877]: 2025-11-29 06:43:20.156 251881 DEBUG nova.virt.libvirt.driver [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 06:43:20 compute-0 nova_compute[251877]:   <model>Nehalem</model>
Nov 29 06:43:20 compute-0 nova_compute[251877]: </cpu>
Nov 29 06:43:20 compute-0 nova_compute[251877]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Nov 29 06:43:20 compute-0 nova_compute[251877]: 2025-11-29 06:43:20.159 251881 DEBUG nova.virt.libvirt.driver [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 29 06:43:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:20.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:43:20 compute-0 nova_compute[251877]: 2025-11-29 06:43:20.441 251881 DEBUG nova.virt.libvirt.volume.mount [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 06:43:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:20 compute-0 nova_compute[251877]: 2025-11-29 06:43:20.765 251881 INFO nova.virt.node [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Determined node identity 36ed0248-8d04-4532-95bb-daab89f12202 from /var/lib/nova/compute_id
Nov 29 06:43:21 compute-0 nova_compute[251877]: 2025-11-29 06:43:21.394 251881 WARNING nova.compute.manager [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Compute nodes ['36ed0248-8d04-4532-95bb-daab89f12202'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 29 06:43:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:21.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:22 compute-0 ceph-mon[74654]: pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:22.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:22 compute-0 nova_compute[251877]: 2025-11-29 06:43:22.537 251881 INFO nova.compute.manager [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 29 06:43:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:23.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:24 compute-0 nova_compute[251877]: 2025-11-29 06:43:24.065 251881 WARNING nova.compute.manager [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 29 06:43:24 compute-0 nova_compute[251877]: 2025-11-29 06:43:24.065 251881 DEBUG oslo_concurrency.lockutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:43:24 compute-0 nova_compute[251877]: 2025-11-29 06:43:24.065 251881 DEBUG oslo_concurrency.lockutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:43:24 compute-0 nova_compute[251877]: 2025-11-29 06:43:24.065 251881 DEBUG oslo_concurrency.lockutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:43:24 compute-0 nova_compute[251877]: 2025-11-29 06:43:24.066 251881 DEBUG nova.compute.resource_tracker [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 06:43:24 compute-0 nova_compute[251877]: 2025-11-29 06:43:24.066 251881 DEBUG oslo_concurrency.processutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:43:24 compute-0 ceph-mon[74654]: pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:43:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:43:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:43:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:43:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:43:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:43:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000057s ======
Nov 29 06:43:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:24.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Nov 29 06:43:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:43:24 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/766159143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:43:24 compute-0 nova_compute[251877]: 2025-11-29 06:43:24.518 251881 DEBUG oslo_concurrency.processutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:43:24 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 06:43:24 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 29 06:43:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:24 compute-0 nova_compute[251877]: 2025-11-29 06:43:24.991 251881 WARNING nova.virt.libvirt.driver [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 06:43:24 compute-0 nova_compute[251877]: 2025-11-29 06:43:24.993 251881 DEBUG nova.compute.resource_tracker [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5211MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 06:43:24 compute-0 nova_compute[251877]: 2025-11-29 06:43:24.994 251881 DEBUG oslo_concurrency.lockutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:43:24 compute-0 nova_compute[251877]: 2025-11-29 06:43:24.994 251881 DEBUG oslo_concurrency.lockutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:43:25 compute-0 nova_compute[251877]: 2025-11-29 06:43:25.157 251881 WARNING nova.compute.resource_tracker [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] No compute node record for compute-0.ctlplane.example.com:36ed0248-8d04-4532-95bb-daab89f12202: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 36ed0248-8d04-4532-95bb-daab89f12202 could not be found.
Nov 29 06:43:25 compute-0 nova_compute[251877]: 2025-11-29 06:43:25.254 251881 INFO nova.compute.resource_tracker [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 36ed0248-8d04-4532-95bb-daab89f12202
Nov 29 06:43:25 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1216257790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:43:25 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/766159143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:43:25 compute-0 ceph-mon[74654]: pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:25 compute-0 nova_compute[251877]: 2025-11-29 06:43:25.579 251881 DEBUG nova.compute.resource_tracker [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 06:43:25 compute-0 nova_compute[251877]: 2025-11-29 06:43:25.580 251881 DEBUG nova.compute.resource_tracker [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 06:43:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:43:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:43:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:25.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:43:25 compute-0 nova_compute[251877]: 2025-11-29 06:43:25.876 251881 INFO nova.scheduler.client.report [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] [req-06598c54-fb62-4044-8dcf-489128907ffe] Created resource provider record via placement API for resource provider with UUID 36ed0248-8d04-4532-95bb-daab89f12202 and name compute-0.ctlplane.example.com.
Nov 29 06:43:25 compute-0 nova_compute[251877]: 2025-11-29 06:43:25.950 251881 DEBUG oslo_concurrency.processutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:43:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:26.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:43:26 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2410258570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:43:26 compute-0 nova_compute[251877]: 2025-11-29 06:43:26.575 251881 DEBUG oslo_concurrency.processutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:43:26 compute-0 nova_compute[251877]: 2025-11-29 06:43:26.583 251881 DEBUG nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 29 06:43:26 compute-0 nova_compute[251877]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 29 06:43:26 compute-0 nova_compute[251877]: 2025-11-29 06:43:26.583 251881 INFO nova.virt.libvirt.host [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] kernel doesn't support AMD SEV
Nov 29 06:43:26 compute-0 nova_compute[251877]: 2025-11-29 06:43:26.584 251881 DEBUG nova.compute.provider_tree [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Updating inventory in ProviderTree for provider 36ed0248-8d04-4532-95bb-daab89f12202 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 06:43:26 compute-0 nova_compute[251877]: 2025-11-29 06:43:26.585 251881 DEBUG nova.virt.libvirt.driver [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 06:43:26 compute-0 nova_compute[251877]: 2025-11-29 06:43:26.588 251881 DEBUG nova.virt.libvirt.driver [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Libvirt baseline CPU <cpu>
Nov 29 06:43:26 compute-0 nova_compute[251877]:   <arch>x86_64</arch>
Nov 29 06:43:26 compute-0 nova_compute[251877]:   <model>Nehalem</model>
Nov 29 06:43:26 compute-0 nova_compute[251877]:   <vendor>AMD</vendor>
Nov 29 06:43:26 compute-0 nova_compute[251877]:   <topology sockets="8" cores="1" threads="1"/>
Nov 29 06:43:26 compute-0 nova_compute[251877]: </cpu>
Nov 29 06:43:26 compute-0 nova_compute[251877]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Nov 29 06:43:26 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3359582458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:43:26 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2300023521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:43:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:26 compute-0 nova_compute[251877]: 2025-11-29 06:43:26.766 251881 DEBUG nova.scheduler.client.report [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Updated inventory for provider 36ed0248-8d04-4532-95bb-daab89f12202 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 29 06:43:26 compute-0 nova_compute[251877]: 2025-11-29 06:43:26.767 251881 DEBUG nova.compute.provider_tree [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Updating resource provider 36ed0248-8d04-4532-95bb-daab89f12202 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 29 06:43:26 compute-0 nova_compute[251877]: 2025-11-29 06:43:26.767 251881 DEBUG nova.compute.provider_tree [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Updating inventory in ProviderTree for provider 36ed0248-8d04-4532-95bb-daab89f12202 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 06:43:27 compute-0 nova_compute[251877]: 2025-11-29 06:43:27.041 251881 DEBUG nova.compute.provider_tree [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Updating resource provider 36ed0248-8d04-4532-95bb-daab89f12202 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 29 06:43:27 compute-0 sshd-session[252303]: Received disconnect from 193.163.72.91 port 37830:11: Bye Bye [preauth]
Nov 29 06:43:27 compute-0 sshd-session[252303]: Disconnected from authenticating user root 193.163.72.91 port 37830 [preauth]
Nov 29 06:43:27 compute-0 nova_compute[251877]: 2025-11-29 06:43:27.405 251881 DEBUG nova.compute.resource_tracker [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 06:43:27 compute-0 nova_compute[251877]: 2025-11-29 06:43:27.406 251881 DEBUG oslo_concurrency.lockutils [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:43:27 compute-0 nova_compute[251877]: 2025-11-29 06:43:27.406 251881 DEBUG nova.service [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 29 06:43:27 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2410258570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:43:27 compute-0 ceph-mon[74654]: pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:27 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/570139792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:43:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:27.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:28 compute-0 nova_compute[251877]: 2025-11-29 06:43:28.046 251881 DEBUG nova.service [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 29 06:43:28 compute-0 nova_compute[251877]: 2025-11-29 06:43:28.047 251881 DEBUG nova.servicegroup.drivers.db [None req-e073a6d6-a095-4d41-95db-624faa93ff07 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 29 06:43:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:28.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:29 compute-0 podman[252309]: 2025-11-29 06:43:29.173956551 +0000 UTC m=+0.134310750 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 06:43:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:43:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:43:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:43:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:43:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:43:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:43:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:43:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:43:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:43:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:43:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:43:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:29.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:43:30 compute-0 ceph-mon[74654]: pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:30.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:30 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:43:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:31 compute-0 sshd-session[252329]: Received disconnect from 103.147.159.91 port 54550:11: Bye Bye [preauth]
Nov 29 06:43:31 compute-0 sshd-session[252329]: Disconnected from authenticating user root 103.147.159.91 port 54550 [preauth]
Nov 29 06:43:31 compute-0 sudo[252332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:43:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:31.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:31 compute-0 sudo[252332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:43:31 compute-0 sudo[252332]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:31 compute-0 ceph-mon[74654]: pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:31 compute-0 sudo[252357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:43:31 compute-0 sudo[252357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:43:31 compute-0 sudo[252357]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:32.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:33 compute-0 ceph-mon[74654]: pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:33.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:34.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:35 compute-0 sshd-session[252383]: Invalid user admin123 from 197.13.24.157 port 42920
Nov 29 06:43:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:43:35 compute-0 sshd-session[252383]: Received disconnect from 197.13.24.157 port 42920:11: Bye Bye [preauth]
Nov 29 06:43:35 compute-0 sshd-session[252383]: Disconnected from invalid user admin123 197.13.24.157 port 42920 [preauth]
Nov 29 06:43:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:35.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:36 compute-0 ceph-mon[74654]: pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:36.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:37 compute-0 ceph-mon[74654]: pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:37.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:38 compute-0 podman[252387]: 2025-11-29 06:43:38.123130462 +0000 UTC m=+0.076179190 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 06:43:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:38.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:39.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:40 compute-0 ceph-mon[74654]: pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:40.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:43:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:41 compute-0 podman[252408]: 2025-11-29 06:43:41.127366231 +0000 UTC m=+0.099638393 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 06:43:41 compute-0 ceph-mon[74654]: pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:41.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:42.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:43.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:43 compute-0 ceph-mon[74654]: pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:44.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:43:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:45.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:45 compute-0 ceph-mon[74654]: pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:46 compute-0 rsyslogd[1007]: imjournal from <np0005539508:ceph-mon>: begin to drop messages due to rate-limiting
Nov 29 06:43:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:46.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:47 compute-0 ceph-mon[74654]: pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:47.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:48.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:49 compute-0 ceph-mon[74654]: pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:49.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:43:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:50.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:43:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:43:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:50 compute-0 sshd-session[252436]: Connection closed by 101.47.163.116 port 42606 [preauth]
Nov 29 06:43:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:43:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:51.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:43:51 compute-0 ceph-mon[74654]: pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:51 compute-0 sudo[252444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:43:51 compute-0 sudo[252444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:43:51 compute-0 sudo[252444]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:52 compute-0 sudo[252469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:43:52 compute-0 sudo[252469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:43:52 compute-0 sudo[252469]: pam_unix(sudo:session): session closed for user root
Nov 29 06:43:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:52.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:53 compute-0 sshd-session[252442]: Received disconnect from 27.112.78.245 port 59292:11: Bye Bye [preauth]
Nov 29 06:43:53 compute-0 sshd-session[252442]: Disconnected from authenticating user root 27.112.78.245 port 59292 [preauth]
Nov 29 06:43:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:53.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:53 compute-0 ceph-mon[74654]: pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:43:54
Nov 29 06:43:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:43:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:43:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.meta', '.rgw.root', 'vms', 'volumes', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta']
Nov 29 06:43:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:43:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:43:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:43:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:43:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:43:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:43:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:43:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:54.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:54 compute-0 sshd-session[252495]: Received disconnect from 176.109.67.96 port 53552:11: Bye Bye [preauth]
Nov 29 06:43:54 compute-0 sshd-session[252495]: Disconnected from authenticating user root 176.109.67.96 port 53552 [preauth]
Nov 29 06:43:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:43:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:55.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:55 compute-0 ceph-mon[74654]: pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:56.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:43:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:57.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:57 compute-0 ceph-mon[74654]: pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:58 compute-0 nova_compute[251877]: 2025-11-29 06:43:58.049 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:43:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:43:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:43:58.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:43:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:43:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:43:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:43:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:43:59.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:00 compute-0 ceph-mon[74654]: pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:00 compute-0 podman[252500]: 2025-11-29 06:44:00.133185696 +0000 UTC m=+0.086648835 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 06:44:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:00.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:44:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:01 compute-0 anacron[30913]: Job `cron.weekly' started
Nov 29 06:44:01 compute-0 anacron[30913]: Job `cron.weekly' terminated
Nov 29 06:44:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:01.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:02 compute-0 ceph-mon[74654]: pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:02.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:03 compute-0 sudo[252527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:03 compute-0 sudo[252527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:03 compute-0 sudo[252527]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:03 compute-0 ceph-mon[74654]: pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:03 compute-0 sudo[252552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:44:03 compute-0 sudo[252552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:03 compute-0 sudo[252552]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:03.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:03 compute-0 sudo[252577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:03 compute-0 sudo[252577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:03 compute-0 sudo[252577]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:03 compute-0 sudo[252602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:44:03 compute-0 sudo[252602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:04.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:04 compute-0 sudo[252602]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:04 compute-0 sshd-session[252525]: Invalid user admin123 from 34.92.81.41 port 42436
Nov 29 06:44:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:44:04 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:44:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:44:04 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:44:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:44:04 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:44:04 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev bf9bb797-fe23-4263-9d62-7c777ff699c0 does not exist
Nov 29 06:44:04 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev d7f5fe1c-6f67-4783-a6d2-aa47e5720fc9 does not exist
Nov 29 06:44:04 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 3d99d144-1bd7-4030-b2cf-0275918dbd57 does not exist
Nov 29 06:44:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:44:04 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:44:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:44:04 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:44:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:44:04 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:44:04 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:44:04 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:44:04 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:44:04 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:44:04 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:44:04 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:44:04 compute-0 sudo[252658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:04 compute-0 sudo[252658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:04 compute-0 sudo[252658]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:04 compute-0 sshd-session[252525]: Received disconnect from 34.92.81.41 port 42436:11: Bye Bye [preauth]
Nov 29 06:44:04 compute-0 sshd-session[252525]: Disconnected from invalid user admin123 34.92.81.41 port 42436 [preauth]
Nov 29 06:44:04 compute-0 sudo[252683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:44:04 compute-0 sudo[252683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:04 compute-0 sudo[252683]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:04 compute-0 sudo[252708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:04 compute-0 sudo[252708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:04 compute-0 sudo[252708]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:04 compute-0 sudo[252733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:44:04 compute-0 sudo[252733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:05 compute-0 podman[252800]: 2025-11-29 06:44:05.359224266 +0000 UTC m=+0.047643555 container create 53b61004928fadd3289f95a9721962b7898bfd06fd2392dd379250c28c3fcb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 06:44:05 compute-0 systemd[1]: Started libpod-conmon-53b61004928fadd3289f95a9721962b7898bfd06fd2392dd379250c28c3fcb87.scope.
Nov 29 06:44:05 compute-0 podman[252800]: 2025-11-29 06:44:05.3370323 +0000 UTC m=+0.025451679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:44:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:44:05 compute-0 podman[252800]: 2025-11-29 06:44:05.463418316 +0000 UTC m=+0.151837695 container init 53b61004928fadd3289f95a9721962b7898bfd06fd2392dd379250c28c3fcb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 06:44:05 compute-0 podman[252800]: 2025-11-29 06:44:05.469619061 +0000 UTC m=+0.158038360 container start 53b61004928fadd3289f95a9721962b7898bfd06fd2392dd379250c28c3fcb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:44:05 compute-0 podman[252800]: 2025-11-29 06:44:05.473101419 +0000 UTC m=+0.161520818 container attach 53b61004928fadd3289f95a9721962b7898bfd06fd2392dd379250c28c3fcb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:44:05 compute-0 adoring_nash[252816]: 167 167
Nov 29 06:44:05 compute-0 systemd[1]: libpod-53b61004928fadd3289f95a9721962b7898bfd06fd2392dd379250c28c3fcb87.scope: Deactivated successfully.
Nov 29 06:44:05 compute-0 podman[252800]: 2025-11-29 06:44:05.47844968 +0000 UTC m=+0.166868989 container died 53b61004928fadd3289f95a9721962b7898bfd06fd2392dd379250c28c3fcb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 06:44:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f78c322c71a75c73f2c56ff0b138a72431d15b00399ea5e3f92f8141cc2406d-merged.mount: Deactivated successfully.
Nov 29 06:44:05 compute-0 podman[252800]: 2025-11-29 06:44:05.53126304 +0000 UTC m=+0.219682339 container remove 53b61004928fadd3289f95a9721962b7898bfd06fd2392dd379250c28c3fcb87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:44:05 compute-0 systemd[1]: libpod-conmon-53b61004928fadd3289f95a9721962b7898bfd06fd2392dd379250c28c3fcb87.scope: Deactivated successfully.
Nov 29 06:44:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:44:05 compute-0 ceph-mon[74654]: pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:05 compute-0 podman[252840]: 2025-11-29 06:44:05.711665679 +0000 UTC m=+0.038590699 container create c578610d6855817bd93e03d31eefe3828ad7795f198448f56b670409241ba005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 29 06:44:05 compute-0 systemd[1]: Started libpod-conmon-c578610d6855817bd93e03d31eefe3828ad7795f198448f56b670409241ba005.scope.
Nov 29 06:44:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0525cd74af35d5283276da629132a9b8506bb1078cbf4f0d49c88b122e11d3df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0525cd74af35d5283276da629132a9b8506bb1078cbf4f0d49c88b122e11d3df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0525cd74af35d5283276da629132a9b8506bb1078cbf4f0d49c88b122e11d3df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0525cd74af35d5283276da629132a9b8506bb1078cbf4f0d49c88b122e11d3df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0525cd74af35d5283276da629132a9b8506bb1078cbf4f0d49c88b122e11d3df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:44:05 compute-0 podman[252840]: 2025-11-29 06:44:05.695550335 +0000 UTC m=+0.022475375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:44:05 compute-0 podman[252840]: 2025-11-29 06:44:05.795622298 +0000 UTC m=+0.122547358 container init c578610d6855817bd93e03d31eefe3828ad7795f198448f56b670409241ba005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mestorf, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:44:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:05.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:05 compute-0 podman[252840]: 2025-11-29 06:44:05.812911726 +0000 UTC m=+0.139836756 container start c578610d6855817bd93e03d31eefe3828ad7795f198448f56b670409241ba005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mestorf, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:44:05 compute-0 podman[252840]: 2025-11-29 06:44:05.817353601 +0000 UTC m=+0.144278651 container attach c578610d6855817bd93e03d31eefe3828ad7795f198448f56b670409241ba005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:44:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:06.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:06 compute-0 clever_mestorf[252857]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:44:06 compute-0 clever_mestorf[252857]: --> relative data size: 1.0
Nov 29 06:44:06 compute-0 clever_mestorf[252857]: --> All data devices are unavailable
Nov 29 06:44:06 compute-0 systemd[1]: libpod-c578610d6855817bd93e03d31eefe3828ad7795f198448f56b670409241ba005.scope: Deactivated successfully.
Nov 29 06:44:06 compute-0 podman[252840]: 2025-11-29 06:44:06.592801029 +0000 UTC m=+0.919726099 container died c578610d6855817bd93e03d31eefe3828ad7795f198448f56b670409241ba005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mestorf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 06:44:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0525cd74af35d5283276da629132a9b8506bb1078cbf4f0d49c88b122e11d3df-merged.mount: Deactivated successfully.
Nov 29 06:44:06 compute-0 podman[252840]: 2025-11-29 06:44:06.66410067 +0000 UTC m=+0.991025700 container remove c578610d6855817bd93e03d31eefe3828ad7795f198448f56b670409241ba005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_mestorf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:44:06 compute-0 systemd[1]: libpod-conmon-c578610d6855817bd93e03d31eefe3828ad7795f198448f56b670409241ba005.scope: Deactivated successfully.
Nov 29 06:44:06 compute-0 sudo[252733]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:06 compute-0 sudo[252884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:06 compute-0 sudo[252884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:06 compute-0 sudo[252884]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:06 compute-0 sudo[252909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:44:06 compute-0 sudo[252909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:06 compute-0 sudo[252909]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:06 compute-0 sudo[252934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:06 compute-0 sudo[252934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:06 compute-0 sudo[252934]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:07 compute-0 sudo[252959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:44:07 compute-0 sudo[252959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:07 compute-0 nova_compute[251877]: 2025-11-29 06:44:07.029 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:44:07 compute-0 podman[253024]: 2025-11-29 06:44:07.458191054 +0000 UTC m=+0.064764208 container create 1da43af33b9c1766647e043c1300de2a2b25160f02832d9e4b63a0b0254d90ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dewdney, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:44:07 compute-0 systemd[1]: Started libpod-conmon-1da43af33b9c1766647e043c1300de2a2b25160f02832d9e4b63a0b0254d90ab.scope.
Nov 29 06:44:07 compute-0 podman[253024]: 2025-11-29 06:44:07.431650645 +0000 UTC m=+0.038223869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:44:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:44:07 compute-0 podman[253024]: 2025-11-29 06:44:07.550367915 +0000 UTC m=+0.156941049 container init 1da43af33b9c1766647e043c1300de2a2b25160f02832d9e4b63a0b0254d90ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:44:07 compute-0 podman[253024]: 2025-11-29 06:44:07.556975851 +0000 UTC m=+0.163548975 container start 1da43af33b9c1766647e043c1300de2a2b25160f02832d9e4b63a0b0254d90ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dewdney, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:44:07 compute-0 podman[253024]: 2025-11-29 06:44:07.560287904 +0000 UTC m=+0.166861038 container attach 1da43af33b9c1766647e043c1300de2a2b25160f02832d9e4b63a0b0254d90ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:44:07 compute-0 dreamy_dewdney[253042]: 167 167
Nov 29 06:44:07 compute-0 systemd[1]: libpod-1da43af33b9c1766647e043c1300de2a2b25160f02832d9e4b63a0b0254d90ab.scope: Deactivated successfully.
Nov 29 06:44:07 compute-0 podman[253024]: 2025-11-29 06:44:07.564661608 +0000 UTC m=+0.171234732 container died 1da43af33b9c1766647e043c1300de2a2b25160f02832d9e4b63a0b0254d90ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:44:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-09ee14b9d28ac7e20daeed4a7cbfc3d01e59789eada8d5756f6bc27029bb6c2f-merged.mount: Deactivated successfully.
Nov 29 06:44:07 compute-0 podman[253024]: 2025-11-29 06:44:07.615719148 +0000 UTC m=+0.222292272 container remove 1da43af33b9c1766647e043c1300de2a2b25160f02832d9e4b63a0b0254d90ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 06:44:07 compute-0 systemd[1]: libpod-conmon-1da43af33b9c1766647e043c1300de2a2b25160f02832d9e4b63a0b0254d90ab.scope: Deactivated successfully.
Nov 29 06:44:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:44:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:07.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:44:07 compute-0 podman[253066]: 2025-11-29 06:44:07.824351415 +0000 UTC m=+0.045781923 container create fba4deeb8da64444abbe24ae3fa4837c3d05b18ea2bc9d9c210502efdaa35aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:44:07 compute-0 ceph-mon[74654]: pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:07 compute-0 systemd[1]: Started libpod-conmon-fba4deeb8da64444abbe24ae3fa4837c3d05b18ea2bc9d9c210502efdaa35aef.scope.
Nov 29 06:44:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665e554f5e76a29f9a9abd51c84671c82a1bbffd1600a680ad5a84224c1aca19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665e554f5e76a29f9a9abd51c84671c82a1bbffd1600a680ad5a84224c1aca19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665e554f5e76a29f9a9abd51c84671c82a1bbffd1600a680ad5a84224c1aca19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665e554f5e76a29f9a9abd51c84671c82a1bbffd1600a680ad5a84224c1aca19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:44:07 compute-0 podman[253066]: 2025-11-29 06:44:07.804362101 +0000 UTC m=+0.025792649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:44:07 compute-0 podman[253066]: 2025-11-29 06:44:07.915528927 +0000 UTC m=+0.136959455 container init fba4deeb8da64444abbe24ae3fa4837c3d05b18ea2bc9d9c210502efdaa35aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:44:07 compute-0 podman[253066]: 2025-11-29 06:44:07.9220245 +0000 UTC m=+0.143455028 container start fba4deeb8da64444abbe24ae3fa4837c3d05b18ea2bc9d9c210502efdaa35aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:44:07 compute-0 podman[253066]: 2025-11-29 06:44:07.92522013 +0000 UTC m=+0.146650658 container attach fba4deeb8da64444abbe24ae3fa4837c3d05b18ea2bc9d9c210502efdaa35aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:44:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:08.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:08 compute-0 interesting_villani[253083]: {
Nov 29 06:44:08 compute-0 interesting_villani[253083]:     "1": [
Nov 29 06:44:08 compute-0 interesting_villani[253083]:         {
Nov 29 06:44:08 compute-0 interesting_villani[253083]:             "devices": [
Nov 29 06:44:08 compute-0 interesting_villani[253083]:                 "/dev/loop3"
Nov 29 06:44:08 compute-0 interesting_villani[253083]:             ],
Nov 29 06:44:08 compute-0 interesting_villani[253083]:             "lv_name": "ceph_lv0",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:             "lv_size": "7511998464",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:             "name": "ceph_lv0",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:             "tags": {
Nov 29 06:44:08 compute-0 interesting_villani[253083]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:                 "ceph.cluster_name": "ceph",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:                 "ceph.crush_device_class": "",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:                 "ceph.encrypted": "0",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:                 "ceph.osd_id": "1",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:                 "ceph.type": "block",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:                 "ceph.vdo": "0"
Nov 29 06:44:08 compute-0 interesting_villani[253083]:             },
Nov 29 06:44:08 compute-0 interesting_villani[253083]:             "type": "block",
Nov 29 06:44:08 compute-0 interesting_villani[253083]:             "vg_name": "ceph_vg0"
Nov 29 06:44:08 compute-0 interesting_villani[253083]:         }
Nov 29 06:44:08 compute-0 interesting_villani[253083]:     ]
Nov 29 06:44:08 compute-0 interesting_villani[253083]: }
Nov 29 06:44:08 compute-0 systemd[1]: libpod-fba4deeb8da64444abbe24ae3fa4837c3d05b18ea2bc9d9c210502efdaa35aef.scope: Deactivated successfully.
Nov 29 06:44:08 compute-0 podman[253066]: 2025-11-29 06:44:08.715560897 +0000 UTC m=+0.936991445 container died fba4deeb8da64444abbe24ae3fa4837c3d05b18ea2bc9d9c210502efdaa35aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:44:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-665e554f5e76a29f9a9abd51c84671c82a1bbffd1600a680ad5a84224c1aca19-merged.mount: Deactivated successfully.
Nov 29 06:44:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:08 compute-0 podman[253066]: 2025-11-29 06:44:08.78583385 +0000 UTC m=+1.007264358 container remove fba4deeb8da64444abbe24ae3fa4837c3d05b18ea2bc9d9c210502efdaa35aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 06:44:08 compute-0 systemd[1]: libpod-conmon-fba4deeb8da64444abbe24ae3fa4837c3d05b18ea2bc9d9c210502efdaa35aef.scope: Deactivated successfully.
Nov 29 06:44:08 compute-0 sudo[252959]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:08 compute-0 podman[253092]: 2025-11-29 06:44:08.831231641 +0000 UTC m=+0.086069430 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Nov 29 06:44:08 compute-0 sudo[253121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:08 compute-0 sudo[253121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:08 compute-0 sudo[253121]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:08 compute-0 sudo[253146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:44:08 compute-0 sudo[253146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:08 compute-0 sudo[253146]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:08 compute-0 sshd-session[253047]: Received disconnect from 118.193.39.127 port 49824:11: Bye Bye [preauth]
Nov 29 06:44:08 compute-0 sshd-session[253047]: Disconnected from authenticating user root 118.193.39.127 port 49824 [preauth]
Nov 29 06:44:08 compute-0 sudo[253172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:08 compute-0 sudo[253172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:09 compute-0 sudo[253172]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:09 compute-0 sudo[253197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:44:09 compute-0 sudo[253197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:09 compute-0 podman[253262]: 2025-11-29 06:44:09.505869644 +0000 UTC m=+0.064269104 container create 49cb4c1bde3e0d74f1925675a37fdc79c8de26f68b625c697e3e57d7c2b17493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 06:44:09 compute-0 systemd[1]: Started libpod-conmon-49cb4c1bde3e0d74f1925675a37fdc79c8de26f68b625c697e3e57d7c2b17493.scope.
Nov 29 06:44:09 compute-0 podman[253262]: 2025-11-29 06:44:09.479035767 +0000 UTC m=+0.037435287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:44:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:44:09 compute-0 podman[253262]: 2025-11-29 06:44:09.610852826 +0000 UTC m=+0.169252336 container init 49cb4c1bde3e0d74f1925675a37fdc79c8de26f68b625c697e3e57d7c2b17493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_beaver, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 06:44:09 compute-0 podman[253262]: 2025-11-29 06:44:09.623216385 +0000 UTC m=+0.181615845 container start 49cb4c1bde3e0d74f1925675a37fdc79c8de26f68b625c697e3e57d7c2b17493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:44:09 compute-0 podman[253262]: 2025-11-29 06:44:09.628201506 +0000 UTC m=+0.186600986 container attach 49cb4c1bde3e0d74f1925675a37fdc79c8de26f68b625c697e3e57d7c2b17493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:44:09 compute-0 thirsty_beaver[253278]: 167 167
Nov 29 06:44:09 compute-0 systemd[1]: libpod-49cb4c1bde3e0d74f1925675a37fdc79c8de26f68b625c697e3e57d7c2b17493.scope: Deactivated successfully.
Nov 29 06:44:09 compute-0 podman[253262]: 2025-11-29 06:44:09.633749522 +0000 UTC m=+0.192148992 container died 49cb4c1bde3e0d74f1925675a37fdc79c8de26f68b625c697e3e57d7c2b17493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb930dc25e0344e4a81f83b515fe0d4ce5d7264924d1d17c3b07fa53df931602-merged.mount: Deactivated successfully.
Nov 29 06:44:09 compute-0 podman[253262]: 2025-11-29 06:44:09.686466209 +0000 UTC m=+0.244865669 container remove 49cb4c1bde3e0d74f1925675a37fdc79c8de26f68b625c697e3e57d7c2b17493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_beaver, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:44:09 compute-0 systemd[1]: libpod-conmon-49cb4c1bde3e0d74f1925675a37fdc79c8de26f68b625c697e3e57d7c2b17493.scope: Deactivated successfully.
Nov 29 06:44:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:09.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:09 compute-0 ceph-mon[74654]: pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:09 compute-0 podman[253303]: 2025-11-29 06:44:09.900503438 +0000 UTC m=+0.049080966 container create 6e84192ed0af8f4092a70435c192e9d8d7ea09c29619165c0a939fb9cc011593 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 06:44:09 compute-0 systemd[1]: Started libpod-conmon-6e84192ed0af8f4092a70435c192e9d8d7ea09c29619165c0a939fb9cc011593.scope.
Nov 29 06:44:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c4dda9a4d2b6f9d83b559ed6b3a81a0ce7c79545cf74bc5c3040facd38fe92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c4dda9a4d2b6f9d83b559ed6b3a81a0ce7c79545cf74bc5c3040facd38fe92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:44:09 compute-0 podman[253303]: 2025-11-29 06:44:09.878215059 +0000 UTC m=+0.026792627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c4dda9a4d2b6f9d83b559ed6b3a81a0ce7c79545cf74bc5c3040facd38fe92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52c4dda9a4d2b6f9d83b559ed6b3a81a0ce7c79545cf74bc5c3040facd38fe92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:44:09 compute-0 podman[253303]: 2025-11-29 06:44:09.991715681 +0000 UTC m=+0.140293229 container init 6e84192ed0af8f4092a70435c192e9d8d7ea09c29619165c0a939fb9cc011593 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:44:09 compute-0 podman[253303]: 2025-11-29 06:44:09.998940715 +0000 UTC m=+0.147518273 container start 6e84192ed0af8f4092a70435c192e9d8d7ea09c29619165c0a939fb9cc011593 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:44:10 compute-0 podman[253303]: 2025-11-29 06:44:10.00372977 +0000 UTC m=+0.152307288 container attach 6e84192ed0af8f4092a70435c192e9d8d7ea09c29619165c0a939fb9cc011593 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:44:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:10.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:10 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:44:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:10 compute-0 zen_torvalds[253320]: {
Nov 29 06:44:10 compute-0 zen_torvalds[253320]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:44:10 compute-0 zen_torvalds[253320]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:44:10 compute-0 zen_torvalds[253320]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:44:10 compute-0 zen_torvalds[253320]:         "osd_id": 1,
Nov 29 06:44:10 compute-0 zen_torvalds[253320]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:44:10 compute-0 zen_torvalds[253320]:         "type": "bluestore"
Nov 29 06:44:10 compute-0 zen_torvalds[253320]:     }
Nov 29 06:44:10 compute-0 zen_torvalds[253320]: }
Nov 29 06:44:10 compute-0 systemd[1]: libpod-6e84192ed0af8f4092a70435c192e9d8d7ea09c29619165c0a939fb9cc011593.scope: Deactivated successfully.
Nov 29 06:44:10 compute-0 podman[253303]: 2025-11-29 06:44:10.915958637 +0000 UTC m=+1.064536155 container died 6e84192ed0af8f4092a70435c192e9d8d7ea09c29619165c0a939fb9cc011593 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-52c4dda9a4d2b6f9d83b559ed6b3a81a0ce7c79545cf74bc5c3040facd38fe92-merged.mount: Deactivated successfully.
Nov 29 06:44:10 compute-0 podman[253303]: 2025-11-29 06:44:10.976433093 +0000 UTC m=+1.125010621 container remove 6e84192ed0af8f4092a70435c192e9d8d7ea09c29619165c0a939fb9cc011593 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_torvalds, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 06:44:10 compute-0 systemd[1]: libpod-conmon-6e84192ed0af8f4092a70435c192e9d8d7ea09c29619165c0a939fb9cc011593.scope: Deactivated successfully.
Nov 29 06:44:11 compute-0 sudo[253197]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:44:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:44:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:44:11 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:44:11 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 6e11691c-5ff7-4222-8db6-b6885ecd3892 does not exist
Nov 29 06:44:11 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 1458ffa7-b438-4c6a-b9ec-86a07875606b does not exist
Nov 29 06:44:11 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 6a191afc-c706-4661-8915-94c86a68e99a does not exist
Nov 29 06:44:11 compute-0 sudo[253354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:11 compute-0 sudo[253354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:11 compute-0 sudo[253354]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:11 compute-0 sudo[253380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:44:11 compute-0 sudo[253380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:11 compute-0 sudo[253380]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:11 compute-0 podman[253379]: 2025-11-29 06:44:11.302286756 +0000 UTC m=+0.119868742 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:44:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:11.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:12 compute-0 ceph-mon[74654]: pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:44:12 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:44:12 compute-0 sudo[253429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:12 compute-0 sudo[253429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:12 compute-0 sudo[253429]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:12 compute-0 sudo[253454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:12 compute-0 sudo[253454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:12 compute-0 sudo[253454]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:12.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:12 compute-0 nova_compute[251877]: 2025-11-29 06:44:12.960 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:44:12 compute-0 nova_compute[251877]: 2025-11-29 06:44:12.961 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:44:12 compute-0 nova_compute[251877]: 2025-11-29 06:44:12.962 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 06:44:12 compute-0 nova_compute[251877]: 2025-11-29 06:44:12.962 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:44:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:44:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000056s ======
Nov 29 06:44:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:13.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Nov 29 06:44:13 compute-0 sshd-session[253480]: Invalid user ftptest from 103.143.238.173 port 38544
Nov 29 06:44:14 compute-0 sshd-session[253480]: Received disconnect from 103.143.238.173 port 38544:11: Bye Bye [preauth]
Nov 29 06:44:14 compute-0 sshd-session[253480]: Disconnected from invalid user ftptest 103.143.238.173 port 38544 [preauth]
Nov 29 06:44:14 compute-0 ceph-mon[74654]: pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:14.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:44:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:15.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:16 compute-0 ceph-mon[74654]: pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:16.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:44:17.232 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:44:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:44:17.233 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:44:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:44:17.233 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:44:17 compute-0 ceph-mon[74654]: pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:17 compute-0 nova_compute[251877]: 2025-11-29 06:44:17.390 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 4.34 sec
Nov 29 06:44:17 compute-0 nova_compute[251877]: 2025-11-29 06:44:17.460 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 06:44:17 compute-0 nova_compute[251877]: 2025-11-29 06:44:17.461 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:44:17 compute-0 nova_compute[251877]: 2025-11-29 06:44:17.461 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:44:17 compute-0 nova_compute[251877]: 2025-11-29 06:44:17.461 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:44:17 compute-0 nova_compute[251877]: 2025-11-29 06:44:17.462 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:44:17 compute-0 nova_compute[251877]: 2025-11-29 06:44:17.462 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:44:17 compute-0 nova_compute[251877]: 2025-11-29 06:44:17.462 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:44:17 compute-0 nova_compute[251877]: 2025-11-29 06:44:17.462 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 06:44:17 compute-0 nova_compute[251877]: 2025-11-29 06:44:17.462 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:44:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:17.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:18.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:18 compute-0 sshd-session[253483]: Received disconnect from 103.63.25.115 port 45496:11: Bye Bye [preauth]
Nov 29 06:44:18 compute-0 sshd-session[253483]: Disconnected from authenticating user root 103.63.25.115 port 45496 [preauth]
Nov 29 06:44:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:19.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:19 compute-0 ceph-mon[74654]: pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:20.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:44:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:44:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:21.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:44:22 compute-0 sshd-session[253488]: Invalid user user8 from 162.214.92.14 port 51424
Nov 29 06:44:22 compute-0 sshd-session[253488]: Received disconnect from 162.214.92.14 port 51424:11: Bye Bye [preauth]
Nov 29 06:44:22 compute-0 sshd-session[253488]: Disconnected from invalid user user8 162.214.92.14 port 51424 [preauth]
Nov 29 06:44:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:22.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:22 compute-0 ceph-mon[74654]: pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:23.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:44:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:44:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:44:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:44:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:44:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:44:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:24.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:25 compute-0 ceph-mon[74654]: pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:25 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:44:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:25.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:26 compute-0 ceph-mon[74654]: pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:26.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:27.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:28 compute-0 ceph-mon[74654]: pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:44:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:28.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:44:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:29 compute-0 ceph-mon[74654]: pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:44:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:44:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:44:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:44:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:44:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:44:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:44:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:44:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:44:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:44:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:29.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:30.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:30 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:44:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:31 compute-0 podman[253496]: 2025-11-29 06:44:31.142556846 +0000 UTC m=+0.091984526 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 06:44:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:31.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:32 compute-0 ceph-mon[74654]: pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:32 compute-0 sudo[253516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:32 compute-0 sudo[253516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:32 compute-0 sudo[253516]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:32 compute-0 sudo[253541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:32 compute-0 sudo[253541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:44:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:32.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:44:32 compute-0 sudo[253541]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:33 compute-0 ceph-mon[74654]: pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:33.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:34.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:44:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:44:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:35.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:44:36 compute-0 nova_compute[251877]: 2025-11-29 06:44:36.104 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:44:36 compute-0 nova_compute[251877]: 2025-11-29 06:44:36.105 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:44:36 compute-0 nova_compute[251877]: 2025-11-29 06:44:36.105 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:44:36 compute-0 nova_compute[251877]: 2025-11-29 06:44:36.106 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 06:44:36 compute-0 nova_compute[251877]: 2025-11-29 06:44:36.107 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:44:36 compute-0 ceph-mon[74654]: pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:36.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:44:36 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2267221102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:44:36 compute-0 nova_compute[251877]: 2025-11-29 06:44:36.539 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:44:36 compute-0 nova_compute[251877]: 2025-11-29 06:44:36.735 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 06:44:36 compute-0 nova_compute[251877]: 2025-11-29 06:44:36.737 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5199MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 06:44:36 compute-0 nova_compute[251877]: 2025-11-29 06:44:36.738 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:44:36 compute-0 nova_compute[251877]: 2025-11-29 06:44:36.738 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:44:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:37 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2267221102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:44:37 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/2284150235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:44:37 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1970020033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:44:37 compute-0 nova_compute[251877]: 2025-11-29 06:44:37.737 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 0.35 sec
Nov 29 06:44:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:37.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:38 compute-0 ceph-mon[74654]: pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:38.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 06:44:38 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1438398855' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:44:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 06:44:38 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1438398855' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:44:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:39 compute-0 podman[253594]: 2025-11-29 06:44:39.143168304 +0000 UTC m=+0.101702620 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 06:44:39 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/1438398855' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:44:39 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/1438398855' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:44:39 compute-0 ceph-mon[74654]: pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:39 compute-0 sshd-session[253494]: error: kex_exchange_identification: read: Connection timed out
Nov 29 06:44:39 compute-0 sshd-session[253494]: banner exchange: Connection from 58.210.98.130 port 37342: Connection timed out
Nov 29 06:44:39 compute-0 sshd-session[253591]: Received disconnect from 193.163.72.91 port 40046:11: Bye Bye [preauth]
Nov 29 06:44:39 compute-0 sshd-session[253591]: Disconnected from authenticating user root 193.163.72.91 port 40046 [preauth]
Nov 29 06:44:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:39.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:40 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/4273924820' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:44:40 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/4273924820' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:44:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:44:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:40.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:44:40 compute-0 nova_compute[251877]: 2025-11-29 06:44:40.478 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 06:44:40 compute-0 nova_compute[251877]: 2025-11-29 06:44:40.478 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 06:44:40 compute-0 nova_compute[251877]: 2025-11-29 06:44:40.559 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:44:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:44:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:40 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 29 06:44:40 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:40.923968) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 06:44:40 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 29 06:44:40 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398680924426, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 2489, "num_deletes": 509, "total_data_size": 4317133, "memory_usage": 4401264, "flush_reason": "Manual Compaction"}
Nov 29 06:44:40 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 29 06:44:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:44:41 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3344413637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:44:41 compute-0 nova_compute[251877]: 2025-11-29 06:44:41.131 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:44:41 compute-0 nova_compute[251877]: 2025-11-29 06:44:41.139 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 06:44:41 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398681154020, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 4239117, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15167, "largest_seqno": 17655, "table_properties": {"data_size": 4228438, "index_size": 6469, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 23481, "raw_average_key_size": 18, "raw_value_size": 4205271, "raw_average_value_size": 3380, "num_data_blocks": 289, "num_entries": 1244, "num_filter_entries": 1244, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764398425, "oldest_key_time": 1764398425, "file_creation_time": 1764398680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:44:41 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 229778 microseconds, and 15145 cpu microseconds.
Nov 29 06:44:41 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:44:41 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:41.154078) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 4239117 bytes OK
Nov 29 06:44:41 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:41.154098) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 29 06:44:41 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:41.198737) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 29 06:44:41 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:41.198807) EVENT_LOG_v1 {"time_micros": 1764398681198794, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 06:44:41 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:41.198837) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 06:44:41 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 4306149, prev total WAL file size 4306149, number of live WAL files 2.
Nov 29 06:44:41 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:44:41 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:41.200408) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323535' seq:0, type:0; will stop at (end)
Nov 29 06:44:41 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 06:44:41 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(4139KB)], [35(9302KB)]
Nov 29 06:44:41 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398681200470, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 13764620, "oldest_snapshot_seqno": -1}
Nov 29 06:44:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:41.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:42 compute-0 podman[253637]: 2025-11-29 06:44:42.177198192 +0000 UTC m=+0.132860799 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 29 06:44:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:42.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4720 keys, 11177980 bytes, temperature: kUnknown
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398682548358, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 11177980, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11142539, "index_size": 22531, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 118415, "raw_average_key_size": 25, "raw_value_size": 11053307, "raw_average_value_size": 2341, "num_data_blocks": 935, "num_entries": 4720, "num_filter_entries": 4720, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764398681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:42.549034) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 11177980 bytes
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:42.572819) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 10.2 rd, 8.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 9.1 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(5.9) write-amplify(2.6) OK, records in: 5754, records dropped: 1034 output_compression: NoCompression
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:42.572868) EVENT_LOG_v1 {"time_micros": 1764398682572848, "job": 16, "event": "compaction_finished", "compaction_time_micros": 1348193, "compaction_time_cpu_micros": 41863, "output_level": 6, "num_output_files": 1, "total_output_size": 11177980, "num_input_records": 5754, "num_output_records": 4720, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:41.200271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:42.573124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:42.573134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:42.573139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:42.573142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:44:42.573146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398682575477, "job": 0, "event": "table_file_deletion", "file_number": 37}
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:44:42 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398682578385, "job": 0, "event": "table_file_deletion", "file_number": 35}
Nov 29 06:44:42 compute-0 ceph-mon[74654]: pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:42 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/945539473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:44:42 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/10673063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:44:42 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3344413637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:44:42 compute-0 nova_compute[251877]: 2025-11-29 06:44:42.749 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 06:44:42 compute-0 nova_compute[251877]: 2025-11-29 06:44:42.751 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 06:44:42 compute-0 nova_compute[251877]: 2025-11-29 06:44:42.751 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:44:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:43 compute-0 ceph-mon[74654]: pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:43.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:44.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:45 compute-0 ceph-mon[74654]: pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:44:45 compute-0 sshd-session[253665]: Received disconnect from 197.13.24.157 port 60860:11: Bye Bye [preauth]
Nov 29 06:44:45 compute-0 sshd-session[253665]: Disconnected from authenticating user root 197.13.24.157 port 60860 [preauth]
Nov 29 06:44:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:45.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:46.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:47 compute-0 ceph-mon[74654]: pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:47 compute-0 sshd-session[253668]: Invalid user mcserver from 49.247.35.31 port 4200
Nov 29 06:44:47 compute-0 sshd-session[253668]: Received disconnect from 49.247.35.31 port 4200:11: Bye Bye [preauth]
Nov 29 06:44:47 compute-0 sshd-session[253668]: Disconnected from invalid user mcserver 49.247.35.31 port 4200 [preauth]
Nov 29 06:44:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:47.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:48.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:49.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:50 compute-0 ceph-mon[74654]: pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:50.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:44:51 compute-0 ceph-mon[74654]: pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:51.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:44:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:52.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:44:52 compute-0 sudo[253673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:52 compute-0 sudo[253673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:52 compute-0 sudo[253673]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:52 compute-0 sudo[253698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:44:52 compute-0 sudo[253698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:44:52 compute-0 sudo[253698]: pam_unix(sudo:session): session closed for user root
Nov 29 06:44:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:53 compute-0 ceph-mon[74654]: pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:53.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:44:54
Nov 29 06:44:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:44:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:44:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'backups', '.mgr']
Nov 29 06:44:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:44:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:44:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:44:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:44:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:44:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:44:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:44:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:44:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:54.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:44:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:55.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:44:56 compute-0 ceph-mon[74654]: pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:56.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:57 compute-0 ceph-mon[74654]: pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:57.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:44:58.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:44:58 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/1524599570' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:44:58 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/1524599570' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:44:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:59 compute-0 ceph-mon[74654]: pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:44:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:44:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:44:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:44:59.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:00.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:45:01 compute-0 ceph-mon[74654]: pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:01 compute-0 sshd[185364]: Timeout before authentication for connection from 45.78.221.93 to 38.102.83.22, pid = 249812
Nov 29 06:45:01 compute-0 podman[253728]: 2025-11-29 06:45:01.665728123 +0000 UTC m=+0.086305506 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 06:45:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:01.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:02.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:02.878073) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398702878328, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 433, "num_deletes": 251, "total_data_size": 405451, "memory_usage": 414744, "flush_reason": "Manual Compaction"}
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398702901931, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 401811, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17656, "largest_seqno": 18088, "table_properties": {"data_size": 399311, "index_size": 600, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6105, "raw_average_key_size": 18, "raw_value_size": 394334, "raw_average_value_size": 1209, "num_data_blocks": 28, "num_entries": 326, "num_filter_entries": 326, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764398682, "oldest_key_time": 1764398682, "file_creation_time": 1764398702, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 23894 microseconds, and 3410 cpu microseconds.
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:02.901971) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 401811 bytes OK
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:02.901990) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:02.903571) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:02.903588) EVENT_LOG_v1 {"time_micros": 1764398702903583, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:02.903603) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 402805, prev total WAL file size 402805, number of live WAL files 2.
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:02.904150) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(392KB)], [38(10MB)]
Nov 29 06:45:02 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398702904203, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11579791, "oldest_snapshot_seqno": -1}
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4535 keys, 9460525 bytes, temperature: kUnknown
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398703011242, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9460525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9427806, "index_size": 20257, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 115242, "raw_average_key_size": 25, "raw_value_size": 9343158, "raw_average_value_size": 2060, "num_data_blocks": 832, "num_entries": 4535, "num_filter_entries": 4535, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764398702, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:03.011935) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9460525 bytes
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:03.013312) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.8 rd, 88.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.7 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(52.4) write-amplify(23.5) OK, records in: 5046, records dropped: 511 output_compression: NoCompression
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:03.013351) EVENT_LOG_v1 {"time_micros": 1764398703013333, "job": 18, "event": "compaction_finished", "compaction_time_micros": 107429, "compaction_time_cpu_micros": 36659, "output_level": 6, "num_output_files": 1, "total_output_size": 9460525, "num_input_records": 5046, "num_output_records": 4535, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398703014407, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398703018821, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:02.904033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:03.019147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:03.019156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:03.019160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:03.019164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:45:03 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:45:03.019168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:45:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:03.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:03 compute-0 ceph-mon[74654]: pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:04.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:05.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:05 compute-0 ceph-mon[74654]: pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:45:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:45:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:06.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:45:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:07.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:08 compute-0 ceph-mon[74654]: pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:08.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:45:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:09.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:45:10 compute-0 ceph-mon[74654]: pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:10 compute-0 podman[253754]: 2025-11-29 06:45:10.10727981 +0000 UTC m=+0.070030506 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 06:45:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:10.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:45:11 compute-0 sshd-session[253752]: Received disconnect from 103.31.39.143 port 37378:11: Bye Bye [preauth]
Nov 29 06:45:11 compute-0 sshd-session[253752]: Disconnected from authenticating user root 103.31.39.143 port 37378 [preauth]
Nov 29 06:45:11 compute-0 sudo[253774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:11 compute-0 sudo[253774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:11 compute-0 sudo[253774]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:11 compute-0 sudo[253799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:45:11 compute-0 sudo[253799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:11 compute-0 sudo[253799]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:11 compute-0 sudo[253824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:11 compute-0 sudo[253824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:11 compute-0 sudo[253824]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:11 compute-0 sudo[253849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:45:11 compute-0 sudo[253849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:11.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:12 compute-0 ceph-mon[74654]: pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:12 compute-0 sudo[253849]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:45:12 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:45:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:45:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:45:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:45:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:45:12 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev bbec638a-0d2a-470e-b6b3-959da0f1ff02 does not exist
Nov 29 06:45:12 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 1700f077-6a54-4add-a93c-bd7f20ae904e does not exist
Nov 29 06:45:12 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev c29bca68-b54b-4112-b69a-2aa27033e4c7 does not exist
Nov 29 06:45:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:45:12 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:45:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:45:12 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:45:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:45:12 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:45:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:45:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:12.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:45:12 compute-0 sudo[253905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:12 compute-0 sudo[253905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:12 compute-0 sudo[253905]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:12 compute-0 sudo[253936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:45:12 compute-0 sudo[253936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:12 compute-0 sudo[253936]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:12 compute-0 podman[253929]: 2025-11-29 06:45:12.664306851 +0000 UTC m=+0.160374194 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 06:45:12 compute-0 sudo[253978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:12 compute-0 sudo[253978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:12 compute-0 sudo[253978]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:12 compute-0 sudo[253981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:12 compute-0 sudo[253981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:12 compute-0 sudo[253981]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:12 compute-0 sudo[254030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:45:12 compute-0 sudo[254030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:12 compute-0 sudo[254038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:12 compute-0 sudo[254038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:12 compute-0 sudo[254038]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:45:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:45:13 compute-0 podman[254120]: 2025-11-29 06:45:13.110398187 +0000 UTC m=+0.027989811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:45:13 compute-0 podman[254120]: 2025-11-29 06:45:13.312952472 +0000 UTC m=+0.230544106 container create e2ed8f57b4ef4408b21717888085d84ff8cd6d4f80d2b838aaf856c11cae8cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:45:13 compute-0 systemd[1]: Started libpod-conmon-e2ed8f57b4ef4408b21717888085d84ff8cd6d4f80d2b838aaf856c11cae8cdf.scope.
Nov 29 06:45:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:45:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:45:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:45:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:45:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:45:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:45:13 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:45:13 compute-0 podman[254120]: 2025-11-29 06:45:13.393331309 +0000 UTC m=+0.310922963 container init e2ed8f57b4ef4408b21717888085d84ff8cd6d4f80d2b838aaf856c11cae8cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shtern, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:45:13 compute-0 podman[254120]: 2025-11-29 06:45:13.401708996 +0000 UTC m=+0.319300610 container start e2ed8f57b4ef4408b21717888085d84ff8cd6d4f80d2b838aaf856c11cae8cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:45:13 compute-0 podman[254120]: 2025-11-29 06:45:13.405719429 +0000 UTC m=+0.323311093 container attach e2ed8f57b4ef4408b21717888085d84ff8cd6d4f80d2b838aaf856c11cae8cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 06:45:13 compute-0 systemd[1]: libpod-e2ed8f57b4ef4408b21717888085d84ff8cd6d4f80d2b838aaf856c11cae8cdf.scope: Deactivated successfully.
Nov 29 06:45:13 compute-0 cool_shtern[254137]: 167 167
Nov 29 06:45:13 compute-0 conmon[254137]: conmon e2ed8f57b4ef4408b217 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2ed8f57b4ef4408b21717888085d84ff8cd6d4f80d2b838aaf856c11cae8cdf.scope/container/memory.events
Nov 29 06:45:13 compute-0 podman[254120]: 2025-11-29 06:45:13.408910859 +0000 UTC m=+0.326502453 container died e2ed8f57b4ef4408b21717888085d84ff8cd6d4f80d2b838aaf856c11cae8cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e14e021b6abd537b97a646d46cf2561fa782132d23dcde09d20630c6e8afd50-merged.mount: Deactivated successfully.
Nov 29 06:45:13 compute-0 podman[254120]: 2025-11-29 06:45:13.45785724 +0000 UTC m=+0.375448844 container remove e2ed8f57b4ef4408b21717888085d84ff8cd6d4f80d2b838aaf856c11cae8cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shtern, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 06:45:13 compute-0 systemd[1]: libpod-conmon-e2ed8f57b4ef4408b21717888085d84ff8cd6d4f80d2b838aaf856c11cae8cdf.scope: Deactivated successfully.
Nov 29 06:45:13 compute-0 podman[254159]: 2025-11-29 06:45:13.7382634 +0000 UTC m=+0.108330376 container create fa7f21aa0dd1f7cfb7bdecd15b86d539397409a579256a0c9311fd7b70ed98ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:45:13 compute-0 podman[254159]: 2025-11-29 06:45:13.674222724 +0000 UTC m=+0.044289750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:45:13 compute-0 systemd[1]: Started libpod-conmon-fa7f21aa0dd1f7cfb7bdecd15b86d539397409a579256a0c9311fd7b70ed98ac.scope.
Nov 29 06:45:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef55c35989372ba301884e8a69860dbec3c08c9cde1a0a7b79cad3e15d3d2f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef55c35989372ba301884e8a69860dbec3c08c9cde1a0a7b79cad3e15d3d2f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef55c35989372ba301884e8a69860dbec3c08c9cde1a0a7b79cad3e15d3d2f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef55c35989372ba301884e8a69860dbec3c08c9cde1a0a7b79cad3e15d3d2f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef55c35989372ba301884e8a69860dbec3c08c9cde1a0a7b79cad3e15d3d2f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:45:13 compute-0 podman[254159]: 2025-11-29 06:45:13.848427809 +0000 UTC m=+0.218494805 container init fa7f21aa0dd1f7cfb7bdecd15b86d539397409a579256a0c9311fd7b70ed98ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:45:13 compute-0 podman[254159]: 2025-11-29 06:45:13.860594192 +0000 UTC m=+0.230661168 container start fa7f21aa0dd1f7cfb7bdecd15b86d539397409a579256a0c9311fd7b70ed98ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:45:13 compute-0 podman[254159]: 2025-11-29 06:45:13.865418388 +0000 UTC m=+0.235485354 container attach fa7f21aa0dd1f7cfb7bdecd15b86d539397409a579256a0c9311fd7b70ed98ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 06:45:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:13.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:14 compute-0 ceph-mon[74654]: pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:14.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:14 compute-0 upbeat_northcutt[254177]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:45:14 compute-0 upbeat_northcutt[254177]: --> relative data size: 1.0
Nov 29 06:45:14 compute-0 upbeat_northcutt[254177]: --> All data devices are unavailable
Nov 29 06:45:14 compute-0 systemd[1]: libpod-fa7f21aa0dd1f7cfb7bdecd15b86d539397409a579256a0c9311fd7b70ed98ac.scope: Deactivated successfully.
Nov 29 06:45:14 compute-0 podman[254159]: 2025-11-29 06:45:14.768760064 +0000 UTC m=+1.138827040 container died fa7f21aa0dd1f7cfb7bdecd15b86d539397409a579256a0c9311fd7b70ed98ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:45:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-aef55c35989372ba301884e8a69860dbec3c08c9cde1a0a7b79cad3e15d3d2f0-merged.mount: Deactivated successfully.
Nov 29 06:45:15 compute-0 podman[254159]: 2025-11-29 06:45:15.479506606 +0000 UTC m=+1.849573582 container remove fa7f21aa0dd1f7cfb7bdecd15b86d539397409a579256a0c9311fd7b70ed98ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:45:15 compute-0 ceph-mon[74654]: pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:15 compute-0 sudo[254030]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:15 compute-0 systemd[1]: libpod-conmon-fa7f21aa0dd1f7cfb7bdecd15b86d539397409a579256a0c9311fd7b70ed98ac.scope: Deactivated successfully.
Nov 29 06:45:15 compute-0 sudo[254205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:15 compute-0 sudo[254205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:15 compute-0 sudo[254205]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:15 compute-0 sudo[254233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:45:15 compute-0 sudo[254233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:15 compute-0 sudo[254233]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:15 compute-0 sudo[254258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:15 compute-0 sudo[254258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:15 compute-0 sudo[254258]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:15 compute-0 sudo[254283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:45:15 compute-0 sudo[254283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:15.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:45:16 compute-0 podman[254349]: 2025-11-29 06:45:16.381941355 +0000 UTC m=+0.127543299 container create e6a61918c1cf6e78d8e8dbe2c200582ef64109318a4ef7bb1d2520e62980706b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:45:16 compute-0 podman[254349]: 2025-11-29 06:45:16.293022347 +0000 UTC m=+0.038624371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:45:16 compute-0 systemd[1]: Started libpod-conmon-e6a61918c1cf6e78d8e8dbe2c200582ef64109318a4ef7bb1d2520e62980706b.scope.
Nov 29 06:45:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:16.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:45:16 compute-0 podman[254349]: 2025-11-29 06:45:16.492674819 +0000 UTC m=+0.238276813 container init e6a61918c1cf6e78d8e8dbe2c200582ef64109318a4ef7bb1d2520e62980706b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:45:16 compute-0 podman[254349]: 2025-11-29 06:45:16.500974063 +0000 UTC m=+0.246576017 container start e6a61918c1cf6e78d8e8dbe2c200582ef64109318a4ef7bb1d2520e62980706b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:45:16 compute-0 xenodochial_kapitsa[254365]: 167 167
Nov 29 06:45:16 compute-0 systemd[1]: libpod-e6a61918c1cf6e78d8e8dbe2c200582ef64109318a4ef7bb1d2520e62980706b.scope: Deactivated successfully.
Nov 29 06:45:16 compute-0 podman[254349]: 2025-11-29 06:45:16.50901264 +0000 UTC m=+0.254614674 container attach e6a61918c1cf6e78d8e8dbe2c200582ef64109318a4ef7bb1d2520e62980706b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:45:16 compute-0 podman[254349]: 2025-11-29 06:45:16.509429012 +0000 UTC m=+0.255030996 container died e6a61918c1cf6e78d8e8dbe2c200582ef64109318a4ef7bb1d2520e62980706b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 06:45:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-21b95d0c9f22dff2acf26e1cdb65afa81f9b8936d7dbf70852a8b07ece0fb4a3-merged.mount: Deactivated successfully.
Nov 29 06:45:16 compute-0 podman[254349]: 2025-11-29 06:45:16.559544876 +0000 UTC m=+0.305146820 container remove e6a61918c1cf6e78d8e8dbe2c200582ef64109318a4ef7bb1d2520e62980706b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 06:45:16 compute-0 systemd[1]: libpod-conmon-e6a61918c1cf6e78d8e8dbe2c200582ef64109318a4ef7bb1d2520e62980706b.scope: Deactivated successfully.
Nov 29 06:45:16 compute-0 podman[254390]: 2025-11-29 06:45:16.787465636 +0000 UTC m=+0.063095431 container create dd1af0c4c9462cf4e9cadc9b1b412e3d7cdc455b3cc3f95aab89dac67948c811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:45:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:16 compute-0 sshd-session[254203]: Received disconnect from 118.193.39.127 port 44752:11: Bye Bye [preauth]
Nov 29 06:45:16 compute-0 sshd-session[254203]: Disconnected from authenticating user root 118.193.39.127 port 44752 [preauth]
Nov 29 06:45:16 compute-0 podman[254390]: 2025-11-29 06:45:16.747145359 +0000 UTC m=+0.022775194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:45:16 compute-0 systemd[1]: Started libpod-conmon-dd1af0c4c9462cf4e9cadc9b1b412e3d7cdc455b3cc3f95aab89dac67948c811.scope.
Nov 29 06:45:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f660420cc51c368e572e0ac78a44564ec5c5778e30a1a4cf638a9e32e12ce2ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f660420cc51c368e572e0ac78a44564ec5c5778e30a1a4cf638a9e32e12ce2ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f660420cc51c368e572e0ac78a44564ec5c5778e30a1a4cf638a9e32e12ce2ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f660420cc51c368e572e0ac78a44564ec5c5778e30a1a4cf638a9e32e12ce2ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:45:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:45:17.233 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:45:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:45:17.234 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:45:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:45:17.234 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:45:17 compute-0 podman[254390]: 2025-11-29 06:45:17.247775312 +0000 UTC m=+0.523405147 container init dd1af0c4c9462cf4e9cadc9b1b412e3d7cdc455b3cc3f95aab89dac67948c811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 06:45:17 compute-0 podman[254390]: 2025-11-29 06:45:17.258100674 +0000 UTC m=+0.533730459 container start dd1af0c4c9462cf4e9cadc9b1b412e3d7cdc455b3cc3f95aab89dac67948c811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 06:45:17 compute-0 podman[254390]: 2025-11-29 06:45:17.261995544 +0000 UTC m=+0.537625349 container attach dd1af0c4c9462cf4e9cadc9b1b412e3d7cdc455b3cc3f95aab89dac67948c811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:45:17 compute-0 sshd-session[254404]: Invalid user user5 from 176.109.67.96 port 47606
Nov 29 06:45:17 compute-0 sshd-session[254404]: Received disconnect from 176.109.67.96 port 47606:11: Bye Bye [preauth]
Nov 29 06:45:17 compute-0 sshd-session[254404]: Disconnected from invalid user user5 176.109.67.96 port 47606 [preauth]
Nov 29 06:45:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:17.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:18 compute-0 focused_benz[254408]: {
Nov 29 06:45:18 compute-0 focused_benz[254408]:     "1": [
Nov 29 06:45:18 compute-0 focused_benz[254408]:         {
Nov 29 06:45:18 compute-0 focused_benz[254408]:             "devices": [
Nov 29 06:45:18 compute-0 focused_benz[254408]:                 "/dev/loop3"
Nov 29 06:45:18 compute-0 focused_benz[254408]:             ],
Nov 29 06:45:18 compute-0 focused_benz[254408]:             "lv_name": "ceph_lv0",
Nov 29 06:45:18 compute-0 focused_benz[254408]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:45:18 compute-0 focused_benz[254408]:             "lv_size": "7511998464",
Nov 29 06:45:18 compute-0 focused_benz[254408]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:45:18 compute-0 focused_benz[254408]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:45:18 compute-0 focused_benz[254408]:             "name": "ceph_lv0",
Nov 29 06:45:18 compute-0 focused_benz[254408]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:45:18 compute-0 focused_benz[254408]:             "tags": {
Nov 29 06:45:18 compute-0 focused_benz[254408]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:45:18 compute-0 focused_benz[254408]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:45:18 compute-0 focused_benz[254408]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:45:18 compute-0 focused_benz[254408]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:45:18 compute-0 focused_benz[254408]:                 "ceph.cluster_name": "ceph",
Nov 29 06:45:18 compute-0 focused_benz[254408]:                 "ceph.crush_device_class": "",
Nov 29 06:45:18 compute-0 focused_benz[254408]:                 "ceph.encrypted": "0",
Nov 29 06:45:18 compute-0 focused_benz[254408]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:45:18 compute-0 focused_benz[254408]:                 "ceph.osd_id": "1",
Nov 29 06:45:18 compute-0 focused_benz[254408]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:45:18 compute-0 focused_benz[254408]:                 "ceph.type": "block",
Nov 29 06:45:18 compute-0 focused_benz[254408]:                 "ceph.vdo": "0"
Nov 29 06:45:18 compute-0 focused_benz[254408]:             },
Nov 29 06:45:18 compute-0 focused_benz[254408]:             "type": "block",
Nov 29 06:45:18 compute-0 focused_benz[254408]:             "vg_name": "ceph_vg0"
Nov 29 06:45:18 compute-0 focused_benz[254408]:         }
Nov 29 06:45:18 compute-0 focused_benz[254408]:     ]
Nov 29 06:45:18 compute-0 focused_benz[254408]: }
Nov 29 06:45:18 compute-0 systemd[1]: libpod-dd1af0c4c9462cf4e9cadc9b1b412e3d7cdc455b3cc3f95aab89dac67948c811.scope: Deactivated successfully.
Nov 29 06:45:18 compute-0 podman[254390]: 2025-11-29 06:45:18.040690184 +0000 UTC m=+1.316320019 container died dd1af0c4c9462cf4e9cadc9b1b412e3d7cdc455b3cc3f95aab89dac67948c811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 06:45:18 compute-0 ceph-mon[74654]: pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:18.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:18 compute-0 sshd-session[254428]: Invalid user cumulus from 103.143.238.173 port 39686
Nov 29 06:45:18 compute-0 sshd-session[254428]: Received disconnect from 103.143.238.173 port 39686:11: Bye Bye [preauth]
Nov 29 06:45:18 compute-0 sshd-session[254428]: Disconnected from invalid user cumulus 103.143.238.173 port 39686 [preauth]
Nov 29 06:45:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f660420cc51c368e572e0ac78a44564ec5c5778e30a1a4cf638a9e32e12ce2ca-merged.mount: Deactivated successfully.
Nov 29 06:45:19 compute-0 ceph-mon[74654]: pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:19 compute-0 podman[254390]: 2025-11-29 06:45:19.805669448 +0000 UTC m=+3.081299283 container remove dd1af0c4c9462cf4e9cadc9b1b412e3d7cdc455b3cc3f95aab89dac67948c811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_benz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:45:19 compute-0 systemd[1]: libpod-conmon-dd1af0c4c9462cf4e9cadc9b1b412e3d7cdc455b3cc3f95aab89dac67948c811.scope: Deactivated successfully.
Nov 29 06:45:19 compute-0 sudo[254283]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:19 compute-0 sudo[254433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:19 compute-0 sudo[254433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:19 compute-0 sudo[254433]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:45:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:19.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:45:19 compute-0 sudo[254458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:45:19 compute-0 sudo[254458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:20 compute-0 sudo[254458]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:20 compute-0 sudo[254483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:20 compute-0 sudo[254483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:20 compute-0 sudo[254483]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:20 compute-0 sudo[254508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:45:20 compute-0 sudo[254508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:20.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:20 compute-0 podman[254573]: 2025-11-29 06:45:20.500288585 +0000 UTC m=+0.028252448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:45:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:45:21 compute-0 podman[254573]: 2025-11-29 06:45:21.24017012 +0000 UTC m=+0.768133983 container create e5c47e38bc59d6cf57d02f6b7cbb97ba0a49739c76db35efd9b24c8a9bc5662d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 06:45:21 compute-0 ceph-mon[74654]: pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:21 compute-0 systemd[1]: Started libpod-conmon-e5c47e38bc59d6cf57d02f6b7cbb97ba0a49739c76db35efd9b24c8a9bc5662d.scope.
Nov 29 06:45:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:45:21 compute-0 podman[254573]: 2025-11-29 06:45:21.352591251 +0000 UTC m=+0.880555154 container init e5c47e38bc59d6cf57d02f6b7cbb97ba0a49739c76db35efd9b24c8a9bc5662d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:45:21 compute-0 podman[254573]: 2025-11-29 06:45:21.362416128 +0000 UTC m=+0.890379981 container start e5c47e38bc59d6cf57d02f6b7cbb97ba0a49739c76db35efd9b24c8a9bc5662d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:45:21 compute-0 podman[254573]: 2025-11-29 06:45:21.365487205 +0000 UTC m=+0.893451038 container attach e5c47e38bc59d6cf57d02f6b7cbb97ba0a49739c76db35efd9b24c8a9bc5662d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 06:45:21 compute-0 ecstatic_chebyshev[254592]: 167 167
Nov 29 06:45:21 compute-0 systemd[1]: libpod-e5c47e38bc59d6cf57d02f6b7cbb97ba0a49739c76db35efd9b24c8a9bc5662d.scope: Deactivated successfully.
Nov 29 06:45:21 compute-0 podman[254573]: 2025-11-29 06:45:21.369407416 +0000 UTC m=+0.897371269 container died e5c47e38bc59d6cf57d02f6b7cbb97ba0a49739c76db35efd9b24c8a9bc5662d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 06:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d76981e97f423d3f1d8da9f913d4f7b5aa7468b389343ce9710af3640c6be751-merged.mount: Deactivated successfully.
Nov 29 06:45:21 compute-0 podman[254573]: 2025-11-29 06:45:21.668332609 +0000 UTC m=+1.196296472 container remove e5c47e38bc59d6cf57d02f6b7cbb97ba0a49739c76db35efd9b24c8a9bc5662d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chebyshev, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 06:45:21 compute-0 systemd[1]: libpod-conmon-e5c47e38bc59d6cf57d02f6b7cbb97ba0a49739c76db35efd9b24c8a9bc5662d.scope: Deactivated successfully.
Nov 29 06:45:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:21.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:21 compute-0 podman[254616]: 2025-11-29 06:45:21.955310156 +0000 UTC m=+0.065168520 container create ec589d30b382165726d2b43fc68a9428274f82f16892d7e69756561baa803d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 06:45:22 compute-0 podman[254616]: 2025-11-29 06:45:21.932305847 +0000 UTC m=+0.042164271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:45:22 compute-0 systemd[1]: Started libpod-conmon-ec589d30b382165726d2b43fc68a9428274f82f16892d7e69756561baa803d6b.scope.
Nov 29 06:45:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2471f697d3de9085cc5a86192b61054dc3d5096b2aa696ec2da84f4c4f7af37a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2471f697d3de9085cc5a86192b61054dc3d5096b2aa696ec2da84f4c4f7af37a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2471f697d3de9085cc5a86192b61054dc3d5096b2aa696ec2da84f4c4f7af37a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2471f697d3de9085cc5a86192b61054dc3d5096b2aa696ec2da84f4c4f7af37a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:45:22 compute-0 podman[254616]: 2025-11-29 06:45:22.08944968 +0000 UTC m=+0.199308074 container init ec589d30b382165726d2b43fc68a9428274f82f16892d7e69756561baa803d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 06:45:22 compute-0 podman[254616]: 2025-11-29 06:45:22.095384528 +0000 UTC m=+0.205242912 container start ec589d30b382165726d2b43fc68a9428274f82f16892d7e69756561baa803d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackwell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 06:45:22 compute-0 podman[254616]: 2025-11-29 06:45:22.099146994 +0000 UTC m=+0.209005368 container attach ec589d30b382165726d2b43fc68a9428274f82f16892d7e69756561baa803d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackwell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:45:22 compute-0 sshd-session[254587]: Received disconnect from 34.92.81.41 port 44820:11: Bye Bye [preauth]
Nov 29 06:45:22 compute-0 sshd-session[254587]: Disconnected from authenticating user root 34.92.81.41 port 44820 [preauth]
Nov 29 06:45:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:22.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:22 compute-0 elated_blackwell[254632]: {
Nov 29 06:45:22 compute-0 elated_blackwell[254632]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:45:22 compute-0 elated_blackwell[254632]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:45:22 compute-0 elated_blackwell[254632]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:45:22 compute-0 elated_blackwell[254632]:         "osd_id": 1,
Nov 29 06:45:22 compute-0 elated_blackwell[254632]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:45:22 compute-0 elated_blackwell[254632]:         "type": "bluestore"
Nov 29 06:45:22 compute-0 elated_blackwell[254632]:     }
Nov 29 06:45:22 compute-0 elated_blackwell[254632]: }
Nov 29 06:45:23 compute-0 systemd[1]: libpod-ec589d30b382165726d2b43fc68a9428274f82f16892d7e69756561baa803d6b.scope: Deactivated successfully.
Nov 29 06:45:23 compute-0 podman[254616]: 2025-11-29 06:45:23.021205368 +0000 UTC m=+1.131063742 container died ec589d30b382165726d2b43fc68a9428274f82f16892d7e69756561baa803d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:45:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2471f697d3de9085cc5a86192b61054dc3d5096b2aa696ec2da84f4c4f7af37a-merged.mount: Deactivated successfully.
Nov 29 06:45:23 compute-0 podman[254616]: 2025-11-29 06:45:23.079783351 +0000 UTC m=+1.189641715 container remove ec589d30b382165726d2b43fc68a9428274f82f16892d7e69756561baa803d6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_blackwell, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 06:45:23 compute-0 systemd[1]: libpod-conmon-ec589d30b382165726d2b43fc68a9428274f82f16892d7e69756561baa803d6b.scope: Deactivated successfully.
Nov 29 06:45:23 compute-0 sudo[254508]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:45:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:45:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:45:23 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:45:23 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev abb9ff00-127b-45cc-8b36-6b61b3792c77 does not exist
Nov 29 06:45:23 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev ea25c2c6-f5c6-4cf3-98f8-5d76c05de3c8 does not exist
Nov 29 06:45:23 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev cea4e130-8765-485c-955c-5ff983c32caf does not exist
Nov 29 06:45:23 compute-0 sudo[254667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:23 compute-0 sudo[254667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:23 compute-0 sudo[254667]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:23 compute-0 sudo[254692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:45:23 compute-0 sudo[254692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:23 compute-0 sudo[254692]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:23.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:24 compute-0 ceph-mon[74654]: pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:45:24 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:45:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:45:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:45:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:45:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:45:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:45:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:45:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:24.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:25 compute-0 ceph-mon[74654]: pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:25.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:45:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:45:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:26.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:45:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:27 compute-0 sshd-session[254718]: Invalid user max from 162.214.92.14 port 50596
Nov 29 06:45:27 compute-0 sshd-session[254718]: Received disconnect from 162.214.92.14 port 50596:11: Bye Bye [preauth]
Nov 29 06:45:27 compute-0 sshd-session[254718]: Disconnected from invalid user max 162.214.92.14 port 50596 [preauth]
Nov 29 06:45:27 compute-0 ceph-mon[74654]: pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:27.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:45:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:28.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:45:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:45:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:45:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:45:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:45:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:45:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:45:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:45:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:45:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:45:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:45:29 compute-0 ceph-mon[74654]: pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:45:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:29.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:45:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:30.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:45:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:45:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:31.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:45:32 compute-0 ceph-mon[74654]: pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:32 compute-0 podman[254723]: 2025-11-29 06:45:32.142566758 +0000 UTC m=+0.089321791 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 06:45:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:45:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:32.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:45:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:32 compute-0 sudo[254743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:32 compute-0 sudo[254743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:32 compute-0 sudo[254743]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:32 compute-0 sudo[254768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:32 compute-0 sudo[254768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:32 compute-0 sudo[254768]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:33 compute-0 ceph-mon[74654]: pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:33.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:45:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:34.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:45:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:45:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:35.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:45:36 compute-0 ceph-mon[74654]: pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:45:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:45:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:36.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:45:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:37 compute-0 ceph-mon[74654]: pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:45:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:37.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:45:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:45:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:38.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:45:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:39 compute-0 ceph-mon[74654]: pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:39.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:40 compute-0 sshd[185364]: drop connection #0 from [45.78.221.93]:59596 on [38.102.83.22]:22 penalty: exceeded LoginGraceTime
Nov 29 06:45:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:40.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:41 compute-0 podman[254797]: 2025-11-29 06:45:41.109615644 +0000 UTC m=+0.066625811 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 06:45:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:45:41 compute-0 ceph-mon[74654]: pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:45:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:41.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:45:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:45:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:42.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:45:42 compute-0 nova_compute[251877]: 2025-11-29 06:45:42.743 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:45:42 compute-0 nova_compute[251877]: 2025-11-29 06:45:42.744 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:45:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:43 compute-0 podman[254818]: 2025-11-29 06:45:43.186995332 +0000 UTC m=+0.151978379 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 29 06:45:43 compute-0 ceph-mon[74654]: pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:45:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:43.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:45:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:44.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:45.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:45 compute-0 ceph-mon[74654]: pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:45:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:46.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:47.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:48 compute-0 ceph-mon[74654]: pgmap v1000: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:48.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:49 compute-0 ceph-mon[74654]: pgmap v1001: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:45:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:49.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:45:50 compute-0 nova_compute[251877]: 2025-11-29 06:45:50.203 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 2.45 sec
Nov 29 06:45:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:50.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:45:51 compute-0 sshd-session[254848]: Received disconnect from 27.112.78.245 port 58186:11: Bye Bye [preauth]
Nov 29 06:45:51 compute-0 sshd-session[254848]: Disconnected from authenticating user root 27.112.78.245 port 58186 [preauth]
Nov 29 06:45:51 compute-0 ceph-mon[74654]: pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:45:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:51.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:45:52 compute-0 sshd-session[254851]: Invalid user user from 193.163.72.91 port 48430
Nov 29 06:45:52 compute-0 sshd-session[254851]: Received disconnect from 193.163.72.91 port 48430:11: Bye Bye [preauth]
Nov 29 06:45:52 compute-0 sshd-session[254851]: Disconnected from invalid user user 193.163.72.91 port 48430 [preauth]
Nov 29 06:45:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:45:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:52.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:45:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:52 compute-0 nova_compute[251877]: 2025-11-29 06:45:52.887 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:45:52 compute-0 nova_compute[251877]: 2025-11-29 06:45:52.887 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 06:45:52 compute-0 nova_compute[251877]: 2025-11-29 06:45:52.888 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 06:45:53 compute-0 sudo[254853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:53 compute-0 sudo[254853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:53 compute-0 sudo[254853]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:53 compute-0 sudo[254879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:45:53 compute-0 sudo[254879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:45:53 compute-0 sudo[254879]: pam_unix(sudo:session): session closed for user root
Nov 29 06:45:53 compute-0 ceph-mon[74654]: pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:54.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:45:54
Nov 29 06:45:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:45:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:45:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'backups', 'volumes', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.meta']
Nov 29 06:45:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:45:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:45:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:45:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:45:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:45:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:45:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:45:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:45:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:54.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:45:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:55 compute-0 sshd-session[254904]: Invalid user hadoop from 197.13.24.157 port 50950
Nov 29 06:45:55 compute-0 sshd-session[254904]: Received disconnect from 197.13.24.157 port 50950:11: Bye Bye [preauth]
Nov 29 06:45:55 compute-0 sshd-session[254904]: Disconnected from invalid user hadoop 197.13.24.157 port 50950 [preauth]
Nov 29 06:45:55 compute-0 nova_compute[251877]: 2025-11-29 06:45:55.563 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 06:45:55 compute-0 nova_compute[251877]: 2025-11-29 06:45:55.563 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:45:55 compute-0 nova_compute[251877]: 2025-11-29 06:45:55.564 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:45:55 compute-0 nova_compute[251877]: 2025-11-29 06:45:55.564 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:45:55 compute-0 nova_compute[251877]: 2025-11-29 06:45:55.564 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:45:55 compute-0 nova_compute[251877]: 2025-11-29 06:45:55.565 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:45:55 compute-0 nova_compute[251877]: 2025-11-29 06:45:55.565 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:45:55 compute-0 nova_compute[251877]: 2025-11-29 06:45:55.565 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 06:45:55 compute-0 nova_compute[251877]: 2025-11-29 06:45:55.565 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:45:55 compute-0 ceph-mon[74654]: pgmap v1004: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:56.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:45:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:45:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:56.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:45:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:57 compute-0 ceph-mon[74654]: pgmap v1005: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:45:58.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:45:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:45:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:45:58.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:45:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:45:59 compute-0 ceph-mon[74654]: pgmap v1006: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:00.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:00.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:46:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:02.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:02 compute-0 ceph-mon[74654]: pgmap v1007: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:02.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 06:46:02 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1002726251' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:46:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 06:46:02 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1002726251' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:46:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:03 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/1002726251' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:46:03 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/1002726251' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:46:03 compute-0 podman[254911]: 2025-11-29 06:46:03.113947472 +0000 UTC m=+0.074450221 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 06:46:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:04.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:04 compute-0 ceph-mon[74654]: pgmap v1008: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:04.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:06.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:06 compute-0 ceph-mon[74654]: pgmap v1009: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:46:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:06.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:07 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:46:07 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4082 writes, 18K keys, 4082 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4082 writes, 4082 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1304 writes, 5980 keys, 1304 commit groups, 1.0 writes per commit group, ingest: 9.35 MB, 0.02 MB/s
                                           Interval WAL: 1304 writes, 1304 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.5      2.25              0.09         9    0.250       0      0       0.0       0.0
                                             L6      1/0    9.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2     21.4     17.9      4.21              0.26         8    0.527     38K   4323       0.0       0.0
                                            Sum      1/0    9.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     14.0     15.4      6.47              0.35        17    0.380     38K   4323       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.5     18.6     18.6      3.13              0.23        10    0.313     25K   3033       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     21.4     17.9      4.21              0.26         8    0.527     38K   4323       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.5      2.25              0.09         8    0.281       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.023, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 6.5 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 3.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e1a58311f0#2 capacity: 304.00 MB usage: 5.46 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(286,5.13 MB,1.68742%) FilterBlock(18,113.92 KB,0.036596%) IndexBlock(18,224.36 KB,0.0720727%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 06:46:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:08.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:08 compute-0 ceph-mon[74654]: pgmap v1010: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:08 compute-0 sshd-session[254935]: Invalid user mysql from 103.63.25.115 port 45810
Nov 29 06:46:08 compute-0 sshd-session[254935]: Received disconnect from 103.63.25.115 port 45810:11: Bye Bye [preauth]
Nov 29 06:46:08 compute-0 sshd-session[254935]: Disconnected from invalid user mysql 103.63.25.115 port 45810 [preauth]
Nov 29 06:46:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:08.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:08 compute-0 sshd-session[254908]: error: kex_exchange_identification: read: Connection timed out
Nov 29 06:46:08 compute-0 sshd-session[254908]: banner exchange: Connection from 58.210.98.130 port 56056: Connection timed out
Nov 29 06:46:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:09 compute-0 nova_compute[251877]: 2025-11-29 06:46:09.000 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:46:09 compute-0 nova_compute[251877]: 2025-11-29 06:46:09.000 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:46:09 compute-0 nova_compute[251877]: 2025-11-29 06:46:09.001 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:46:09 compute-0 nova_compute[251877]: 2025-11-29 06:46:09.001 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 06:46:09 compute-0 nova_compute[251877]: 2025-11-29 06:46:09.001 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:46:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:46:09 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3232813680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:46:09 compute-0 nova_compute[251877]: 2025-11-29 06:46:09.433 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:46:09 compute-0 nova_compute[251877]: 2025-11-29 06:46:09.622 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 06:46:09 compute-0 nova_compute[251877]: 2025-11-29 06:46:09.624 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5204MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 06:46:09 compute-0 nova_compute[251877]: 2025-11-29 06:46:09.624 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:46:09 compute-0 nova_compute[251877]: 2025-11-29 06:46:09.624 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:46:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:10.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:10 compute-0 ceph-mon[74654]: pgmap v1011: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:10 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2861061763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:46:10 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3232813680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:46:10 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3671393982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:46:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:10.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:10 compute-0 nova_compute[251877]: 2025-11-29 06:46:10.606 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 06:46:10 compute-0 nova_compute[251877]: 2025-11-29 06:46:10.607 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 06:46:10 compute-0 nova_compute[251877]: 2025-11-29 06:46:10.646 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:46:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:46:11 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3071034240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:46:11 compute-0 nova_compute[251877]: 2025-11-29 06:46:11.160 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:46:11 compute-0 nova_compute[251877]: 2025-11-29 06:46:11.167 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 06:46:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:46:11 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/616557685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:46:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:46:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:12.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:46:12 compute-0 podman[254984]: 2025-11-29 06:46:12.120512774 +0000 UTC m=+0.082672843 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 06:46:12 compute-0 nova_compute[251877]: 2025-11-29 06:46:12.443 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 06:46:12 compute-0 nova_compute[251877]: 2025-11-29 06:46:12.445 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 06:46:12 compute-0 nova_compute[251877]: 2025-11-29 06:46:12.445 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:46:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:12.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:12 compute-0 ceph-mon[74654]: pgmap v1012: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:12 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3071034240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:46:12 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1964139985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:46:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:46:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:46:13 compute-0 sudo[255005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:13 compute-0 sudo[255005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:13 compute-0 sudo[255005]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:13 compute-0 sudo[255039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:13 compute-0 sudo[255039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:13 compute-0 sudo[255039]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:13 compute-0 podman[255029]: 2025-11-29 06:46:13.448594362 +0000 UTC m=+0.149827727 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 06:46:13 compute-0 ceph-mon[74654]: pgmap v1013: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:14.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:14.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:15 compute-0 ceph-mon[74654]: pgmap v1014: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:46:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:16.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:46:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:46:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:16.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:46:17.233 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:46:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:46:17.235 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:46:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:46:17.235 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:46:17 compute-0 ceph-mon[74654]: pgmap v1015: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:18.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:18.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:19 compute-0 ceph-mon[74654]: pgmap v1016: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:20.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:20.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:46:21 compute-0 ceph-mon[74654]: pgmap v1017: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:22.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:22 compute-0 sshd-session[255087]: Invalid user erpnext from 49.247.35.31 port 64904
Nov 29 06:46:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:22 compute-0 sshd-session[255087]: Received disconnect from 49.247.35.31 port 64904:11: Bye Bye [preauth]
Nov 29 06:46:22 compute-0 sshd-session[255087]: Disconnected from invalid user erpnext 49.247.35.31 port 64904 [preauth]
Nov 29 06:46:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:22.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:23 compute-0 sudo[255092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:23 compute-0 sudo[255092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:23 compute-0 sudo[255092]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:23 compute-0 sshd-session[255090]: Received disconnect from 103.143.238.173 port 35604:11: Bye Bye [preauth]
Nov 29 06:46:23 compute-0 sshd-session[255090]: Disconnected from authenticating user root 103.143.238.173 port 35604 [preauth]
Nov 29 06:46:23 compute-0 sudo[255117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:46:23 compute-0 sudo[255117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:23 compute-0 sudo[255117]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:24 compute-0 sudo[255142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:24 compute-0 sudo[255142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:24 compute-0 sudo[255142]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:24.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:24 compute-0 ceph-mon[74654]: pgmap v1018: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:24 compute-0 sudo[255167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:46:24 compute-0 sudo[255167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:46:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:46:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:46:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:46:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:46:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:46:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:24.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:24 compute-0 sudo[255167]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:46:24 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:46:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:46:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:46:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:46:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:46:24 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 3ea41388-392e-4b7e-8c96-b6653f447a4f does not exist
Nov 29 06:46:24 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 555cccf9-1bb8-4f86-8eb7-2862c25730a1 does not exist
Nov 29 06:46:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:24 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 02aa4779-7a08-4b30-aba3-deb52b14f8c1 does not exist
Nov 29 06:46:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:46:24 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:46:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:46:24 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:46:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:46:24 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:46:24 compute-0 sudo[255225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:24 compute-0 sudo[255225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:24 compute-0 sudo[255225]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:24 compute-0 sudo[255250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:46:24 compute-0 sudo[255250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:25 compute-0 sudo[255250]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:25 compute-0 sudo[255275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:46:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:46:25 compute-0 sudo[255275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:46:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:46:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:46:25 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:46:25 compute-0 sudo[255275]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:25 compute-0 sudo[255301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:46:25 compute-0 sudo[255301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:25 compute-0 podman[255367]: 2025-11-29 06:46:25.611108281 +0000 UTC m=+0.072458816 container create 1a4b7d0cdbfce7d98b04d9706c492f17ecfa431be072bb5a3011287975a5a76e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:46:25 compute-0 systemd[1]: Started libpod-conmon-1a4b7d0cdbfce7d98b04d9706c492f17ecfa431be072bb5a3011287975a5a76e.scope.
Nov 29 06:46:25 compute-0 podman[255367]: 2025-11-29 06:46:25.582455842 +0000 UTC m=+0.043806437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:46:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:46:25 compute-0 podman[255367]: 2025-11-29 06:46:25.715798894 +0000 UTC m=+0.177149469 container init 1a4b7d0cdbfce7d98b04d9706c492f17ecfa431be072bb5a3011287975a5a76e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 06:46:25 compute-0 podman[255367]: 2025-11-29 06:46:25.726611349 +0000 UTC m=+0.187961884 container start 1a4b7d0cdbfce7d98b04d9706c492f17ecfa431be072bb5a3011287975a5a76e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:46:25 compute-0 podman[255367]: 2025-11-29 06:46:25.730745146 +0000 UTC m=+0.192095691 container attach 1a4b7d0cdbfce7d98b04d9706c492f17ecfa431be072bb5a3011287975a5a76e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:46:25 compute-0 zen_booth[255384]: 167 167
Nov 29 06:46:25 compute-0 systemd[1]: libpod-1a4b7d0cdbfce7d98b04d9706c492f17ecfa431be072bb5a3011287975a5a76e.scope: Deactivated successfully.
Nov 29 06:46:25 compute-0 podman[255367]: 2025-11-29 06:46:25.736057586 +0000 UTC m=+0.197408121 container died 1a4b7d0cdbfce7d98b04d9706c492f17ecfa431be072bb5a3011287975a5a76e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:46:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b601d57143053447b7fa5949ed86bf6e47fbfafdcdb109a1f3659af8fc3cc26c-merged.mount: Deactivated successfully.
Nov 29 06:46:25 compute-0 podman[255367]: 2025-11-29 06:46:25.791238153 +0000 UTC m=+0.252588698 container remove 1a4b7d0cdbfce7d98b04d9706c492f17ecfa431be072bb5a3011287975a5a76e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 06:46:25 compute-0 systemd[1]: libpod-conmon-1a4b7d0cdbfce7d98b04d9706c492f17ecfa431be072bb5a3011287975a5a76e.scope: Deactivated successfully.
Nov 29 06:46:25 compute-0 podman[255407]: 2025-11-29 06:46:25.971460337 +0000 UTC m=+0.047871371 container create 65f47fed100a1420cdb306429260bea7d384c89ce89e87061789ca6763df5313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:46:26 compute-0 systemd[1]: Started libpod-conmon-65f47fed100a1420cdb306429260bea7d384c89ce89e87061789ca6763df5313.scope.
Nov 29 06:46:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:46:26 compute-0 podman[255407]: 2025-11-29 06:46:25.952705428 +0000 UTC m=+0.029116502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f77c5392cea19296d36ce91b83c42896541f136e8983a66464aec4fd6a5259b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f77c5392cea19296d36ce91b83c42896541f136e8983a66464aec4fd6a5259b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f77c5392cea19296d36ce91b83c42896541f136e8983a66464aec4fd6a5259b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f77c5392cea19296d36ce91b83c42896541f136e8983a66464aec4fd6a5259b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f77c5392cea19296d36ce91b83c42896541f136e8983a66464aec4fd6a5259b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:46:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:26.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:26 compute-0 podman[255407]: 2025-11-29 06:46:26.074746261 +0000 UTC m=+0.151157355 container init 65f47fed100a1420cdb306429260bea7d384c89ce89e87061789ca6763df5313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 06:46:26 compute-0 podman[255407]: 2025-11-29 06:46:26.086977186 +0000 UTC m=+0.163388250 container start 65f47fed100a1420cdb306429260bea7d384c89ce89e87061789ca6763df5313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 06:46:26 compute-0 ceph-mon[74654]: pgmap v1019: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:26 compute-0 podman[255407]: 2025-11-29 06:46:26.091237497 +0000 UTC m=+0.167648571 container attach 65f47fed100a1420cdb306429260bea7d384c89ce89e87061789ca6763df5313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 06:46:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:26.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:46:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:26 compute-0 infallible_chatterjee[255423]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:46:26 compute-0 infallible_chatterjee[255423]: --> relative data size: 1.0
Nov 29 06:46:26 compute-0 infallible_chatterjee[255423]: --> All data devices are unavailable
Nov 29 06:46:27 compute-0 systemd[1]: libpod-65f47fed100a1420cdb306429260bea7d384c89ce89e87061789ca6763df5313.scope: Deactivated successfully.
Nov 29 06:46:27 compute-0 podman[255438]: 2025-11-29 06:46:27.068958351 +0000 UTC m=+0.040864814 container died 65f47fed100a1420cdb306429260bea7d384c89ce89e87061789ca6763df5313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f77c5392cea19296d36ce91b83c42896541f136e8983a66464aec4fd6a5259b-merged.mount: Deactivated successfully.
Nov 29 06:46:27 compute-0 podman[255438]: 2025-11-29 06:46:27.127961736 +0000 UTC m=+0.099868219 container remove 65f47fed100a1420cdb306429260bea7d384c89ce89e87061789ca6763df5313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:46:27 compute-0 systemd[1]: libpod-conmon-65f47fed100a1420cdb306429260bea7d384c89ce89e87061789ca6763df5313.scope: Deactivated successfully.
Nov 29 06:46:27 compute-0 sudo[255301]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:27 compute-0 sudo[255454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:27 compute-0 sudo[255454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:27 compute-0 sudo[255454]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:27 compute-0 sudo[255479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:46:27 compute-0 sudo[255479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:27 compute-0 sudo[255479]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:27 compute-0 sudo[255504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:27 compute-0 sudo[255504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:27 compute-0 sudo[255504]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:27 compute-0 sudo[255529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:46:27 compute-0 sudo[255529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:27 compute-0 podman[255594]: 2025-11-29 06:46:27.958077895 +0000 UTC m=+0.060020835 container create eb64693f596fcd6ceb0a1eb55bc4708899c2fa0feff1d0adf5ad322c8a9cc3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_clarke, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:46:28 compute-0 systemd[1]: Started libpod-conmon-eb64693f596fcd6ceb0a1eb55bc4708899c2fa0feff1d0adf5ad322c8a9cc3b3.scope.
Nov 29 06:46:28 compute-0 podman[255594]: 2025-11-29 06:46:27.936619539 +0000 UTC m=+0.038562479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:46:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:46:28 compute-0 podman[255594]: 2025-11-29 06:46:28.053897608 +0000 UTC m=+0.155840558 container init eb64693f596fcd6ceb0a1eb55bc4708899c2fa0feff1d0adf5ad322c8a9cc3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_clarke, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 06:46:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:28.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:28 compute-0 podman[255594]: 2025-11-29 06:46:28.062351887 +0000 UTC m=+0.164294807 container start eb64693f596fcd6ceb0a1eb55bc4708899c2fa0feff1d0adf5ad322c8a9cc3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:46:28 compute-0 podman[255594]: 2025-11-29 06:46:28.065677711 +0000 UTC m=+0.167620631 container attach eb64693f596fcd6ceb0a1eb55bc4708899c2fa0feff1d0adf5ad322c8a9cc3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_clarke, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 06:46:28 compute-0 angry_clarke[255610]: 167 167
Nov 29 06:46:28 compute-0 systemd[1]: libpod-eb64693f596fcd6ceb0a1eb55bc4708899c2fa0feff1d0adf5ad322c8a9cc3b3.scope: Deactivated successfully.
Nov 29 06:46:28 compute-0 podman[255594]: 2025-11-29 06:46:28.068933272 +0000 UTC m=+0.170876202 container died eb64693f596fcd6ceb0a1eb55bc4708899c2fa0feff1d0adf5ad322c8a9cc3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_clarke, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:46:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f52d35eff8870e4b9e89bca1cff9776a03f32be242dcb7e283d6923ddaa14922-merged.mount: Deactivated successfully.
Nov 29 06:46:28 compute-0 ceph-mon[74654]: pgmap v1020: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:28 compute-0 podman[255594]: 2025-11-29 06:46:28.1099648 +0000 UTC m=+0.211907720 container remove eb64693f596fcd6ceb0a1eb55bc4708899c2fa0feff1d0adf5ad322c8a9cc3b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_clarke, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 06:46:28 compute-0 systemd[1]: libpod-conmon-eb64693f596fcd6ceb0a1eb55bc4708899c2fa0feff1d0adf5ad322c8a9cc3b3.scope: Deactivated successfully.
Nov 29 06:46:28 compute-0 podman[255636]: 2025-11-29 06:46:28.304792577 +0000 UTC m=+0.050666351 container create cccc088e77097ce96fb455f6bbd4285a43f363af182397397c616ff7ea91b482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 06:46:28 compute-0 systemd[1]: Started libpod-conmon-cccc088e77097ce96fb455f6bbd4285a43f363af182397397c616ff7ea91b482.scope.
Nov 29 06:46:28 compute-0 podman[255636]: 2025-11-29 06:46:28.285107601 +0000 UTC m=+0.030981415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:46:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c41cb2b820df26b4b4507c7e13a98ea2a7c8dc40d43899e89783ed39259c943/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c41cb2b820df26b4b4507c7e13a98ea2a7c8dc40d43899e89783ed39259c943/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c41cb2b820df26b4b4507c7e13a98ea2a7c8dc40d43899e89783ed39259c943/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:46:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c41cb2b820df26b4b4507c7e13a98ea2a7c8dc40d43899e89783ed39259c943/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:46:28 compute-0 podman[255636]: 2025-11-29 06:46:28.419079591 +0000 UTC m=+0.164953395 container init cccc088e77097ce96fb455f6bbd4285a43f363af182397397c616ff7ea91b482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:46:28 compute-0 podman[255636]: 2025-11-29 06:46:28.429355601 +0000 UTC m=+0.175229425 container start cccc088e77097ce96fb455f6bbd4285a43f363af182397397c616ff7ea91b482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:46:28 compute-0 podman[255636]: 2025-11-29 06:46:28.434432304 +0000 UTC m=+0.180306088 container attach cccc088e77097ce96fb455f6bbd4285a43f363af182397397c616ff7ea91b482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_fermat, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:46:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:28.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:28 compute-0 sshd-session[255192]: Received disconnect from 101.47.163.116 port 41970:11: Bye Bye [preauth]
Nov 29 06:46:28 compute-0 sshd-session[255192]: Disconnected from authenticating user root 101.47.163.116 port 41970 [preauth]
Nov 29 06:46:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:29 compute-0 infallible_fermat[255654]: {
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:     "1": [
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:         {
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:             "devices": [
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:                 "/dev/loop3"
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:             ],
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:             "lv_name": "ceph_lv0",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:             "lv_size": "7511998464",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:             "name": "ceph_lv0",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:             "tags": {
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:                 "ceph.cluster_name": "ceph",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:                 "ceph.crush_device_class": "",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:                 "ceph.encrypted": "0",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:                 "ceph.osd_id": "1",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:                 "ceph.type": "block",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:                 "ceph.vdo": "0"
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:             },
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:             "type": "block",
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:             "vg_name": "ceph_vg0"
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:         }
Nov 29 06:46:29 compute-0 infallible_fermat[255654]:     ]
Nov 29 06:46:29 compute-0 infallible_fermat[255654]: }
Nov 29 06:46:29 compute-0 systemd[1]: libpod-cccc088e77097ce96fb455f6bbd4285a43f363af182397397c616ff7ea91b482.scope: Deactivated successfully.
Nov 29 06:46:29 compute-0 podman[255636]: 2025-11-29 06:46:29.222685793 +0000 UTC m=+0.968559607 container died cccc088e77097ce96fb455f6bbd4285a43f363af182397397c616ff7ea91b482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:46:29 compute-0 ceph-mon[74654]: pgmap v1021: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:46:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:46:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:46:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:46:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:46:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:46:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:46:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:46:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:46:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:46:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c41cb2b820df26b4b4507c7e13a98ea2a7c8dc40d43899e89783ed39259c943-merged.mount: Deactivated successfully.
Nov 29 06:46:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:30.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:30 compute-0 sshd-session[255659]: Invalid user user10 from 118.193.39.127 port 49200
Nov 29 06:46:30 compute-0 sshd-session[255659]: Received disconnect from 118.193.39.127 port 49200:11: Bye Bye [preauth]
Nov 29 06:46:30 compute-0 sshd-session[255659]: Disconnected from invalid user user10 118.193.39.127 port 49200 [preauth]
Nov 29 06:46:30 compute-0 podman[255636]: 2025-11-29 06:46:30.35112329 +0000 UTC m=+2.096997114 container remove cccc088e77097ce96fb455f6bbd4285a43f363af182397397c616ff7ea91b482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_fermat, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 29 06:46:30 compute-0 sudo[255529]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:30 compute-0 systemd[1]: libpod-conmon-cccc088e77097ce96fb455f6bbd4285a43f363af182397397c616ff7ea91b482.scope: Deactivated successfully.
Nov 29 06:46:30 compute-0 sudo[255677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:30 compute-0 sudo[255677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:30 compute-0 sudo[255677]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:30.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:30 compute-0 sudo[255702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:46:30 compute-0 sudo[255702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:30 compute-0 sudo[255702]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:30 compute-0 sudo[255727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:30 compute-0 sudo[255727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:30 compute-0 sudo[255727]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:30 compute-0 sudo[255752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:46:30 compute-0 sudo[255752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:31 compute-0 podman[255817]: 2025-11-29 06:46:31.212106321 +0000 UTC m=+0.058476791 container create 788425a9b256e9a52b89a0551a3964b9969c18bf3e9559bad9a682ac2d01a4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 06:46:31 compute-0 systemd[1]: Started libpod-conmon-788425a9b256e9a52b89a0551a3964b9969c18bf3e9559bad9a682ac2d01a4c9.scope.
Nov 29 06:46:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:46:31 compute-0 podman[255817]: 2025-11-29 06:46:31.193732402 +0000 UTC m=+0.040102902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:46:31 compute-0 podman[255817]: 2025-11-29 06:46:31.298348783 +0000 UTC m=+0.144719243 container init 788425a9b256e9a52b89a0551a3964b9969c18bf3e9559bad9a682ac2d01a4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_yalow, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:46:31 compute-0 podman[255817]: 2025-11-29 06:46:31.308216501 +0000 UTC m=+0.154587001 container start 788425a9b256e9a52b89a0551a3964b9969c18bf3e9559bad9a682ac2d01a4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 06:46:31 compute-0 beautiful_yalow[255834]: 167 167
Nov 29 06:46:31 compute-0 podman[255817]: 2025-11-29 06:46:31.313289084 +0000 UTC m=+0.159659554 container attach 788425a9b256e9a52b89a0551a3964b9969c18bf3e9559bad9a682ac2d01a4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_yalow, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:46:31 compute-0 systemd[1]: libpod-788425a9b256e9a52b89a0551a3964b9969c18bf3e9559bad9a682ac2d01a4c9.scope: Deactivated successfully.
Nov 29 06:46:31 compute-0 conmon[255834]: conmon 788425a9b256e9a52b89 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-788425a9b256e9a52b89a0551a3964b9969c18bf3e9559bad9a682ac2d01a4c9.scope/container/memory.events
Nov 29 06:46:31 compute-0 podman[255817]: 2025-11-29 06:46:31.315736723 +0000 UTC m=+0.162107183 container died 788425a9b256e9a52b89a0551a3964b9969c18bf3e9559bad9a682ac2d01a4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_yalow, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 06:46:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-70aebb106368ddaccaec9817a78dc94ce505522dac47250b4f5a7cd5bd37d7ff-merged.mount: Deactivated successfully.
Nov 29 06:46:31 compute-0 podman[255817]: 2025-11-29 06:46:31.356988227 +0000 UTC m=+0.203358727 container remove 788425a9b256e9a52b89a0551a3964b9969c18bf3e9559bad9a682ac2d01a4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_yalow, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:46:31 compute-0 systemd[1]: libpod-conmon-788425a9b256e9a52b89a0551a3964b9969c18bf3e9559bad9a682ac2d01a4c9.scope: Deactivated successfully.
Nov 29 06:46:31 compute-0 podman[255857]: 2025-11-29 06:46:31.617113706 +0000 UTC m=+0.098605523 container create bdfe017223f71d0efd0b8948599e3d2ebeaad93ee7ae5bc89e31e03fcea1367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_euclid, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 06:46:31 compute-0 podman[255857]: 2025-11-29 06:46:31.553514462 +0000 UTC m=+0.035006299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:46:31 compute-0 systemd[1]: Started libpod-conmon-bdfe017223f71d0efd0b8948599e3d2ebeaad93ee7ae5bc89e31e03fcea1367b.scope.
Nov 29 06:46:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9387ddc0a990aaef145e64d600ff9e30d02aa3d7e1c2eca2b5a8a7e8251a19a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9387ddc0a990aaef145e64d600ff9e30d02aa3d7e1c2eca2b5a8a7e8251a19a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9387ddc0a990aaef145e64d600ff9e30d02aa3d7e1c2eca2b5a8a7e8251a19a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:46:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9387ddc0a990aaef145e64d600ff9e30d02aa3d7e1c2eca2b5a8a7e8251a19a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:46:31 compute-0 podman[255857]: 2025-11-29 06:46:31.730925897 +0000 UTC m=+0.212417784 container init bdfe017223f71d0efd0b8948599e3d2ebeaad93ee7ae5bc89e31e03fcea1367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_euclid, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:46:31 compute-0 podman[255857]: 2025-11-29 06:46:31.752236648 +0000 UTC m=+0.233728435 container start bdfe017223f71d0efd0b8948599e3d2ebeaad93ee7ae5bc89e31e03fcea1367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_euclid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 06:46:31 compute-0 podman[255857]: 2025-11-29 06:46:31.756088187 +0000 UTC m=+0.237580094 container attach bdfe017223f71d0efd0b8948599e3d2ebeaad93ee7ae5bc89e31e03fcea1367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 06:46:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:46:32 compute-0 ceph-mon[74654]: pgmap v1022: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:46:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:32.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:46:32 compute-0 festive_euclid[255874]: {
Nov 29 06:46:32 compute-0 festive_euclid[255874]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:46:32 compute-0 festive_euclid[255874]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:46:32 compute-0 festive_euclid[255874]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:46:32 compute-0 festive_euclid[255874]:         "osd_id": 1,
Nov 29 06:46:32 compute-0 festive_euclid[255874]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:46:32 compute-0 festive_euclid[255874]:         "type": "bluestore"
Nov 29 06:46:32 compute-0 festive_euclid[255874]:     }
Nov 29 06:46:32 compute-0 festive_euclid[255874]: }
Nov 29 06:46:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:32.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:32 compute-0 systemd[1]: libpod-bdfe017223f71d0efd0b8948599e3d2ebeaad93ee7ae5bc89e31e03fcea1367b.scope: Deactivated successfully.
Nov 29 06:46:32 compute-0 podman[255857]: 2025-11-29 06:46:32.57459819 +0000 UTC m=+1.056090007 container died bdfe017223f71d0efd0b8948599e3d2ebeaad93ee7ae5bc89e31e03fcea1367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_euclid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 06:46:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9387ddc0a990aaef145e64d600ff9e30d02aa3d7e1c2eca2b5a8a7e8251a19a-merged.mount: Deactivated successfully.
Nov 29 06:46:32 compute-0 podman[255857]: 2025-11-29 06:46:32.737945278 +0000 UTC m=+1.219437075 container remove bdfe017223f71d0efd0b8948599e3d2ebeaad93ee7ae5bc89e31e03fcea1367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_euclid, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 29 06:46:32 compute-0 systemd[1]: libpod-conmon-bdfe017223f71d0efd0b8948599e3d2ebeaad93ee7ae5bc89e31e03fcea1367b.scope: Deactivated successfully.
Nov 29 06:46:32 compute-0 sudo[255752]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:46:32 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:46:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:46:32 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:46:32 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 1b02a72f-b706-47d4-a155-15802bec8228 does not exist
Nov 29 06:46:32 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 8f79c2bc-8d6f-4030-ba06-f6db6a2aa498 does not exist
Nov 29 06:46:32 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 2aec42c5-95a0-497a-86da-ffcae1880f77 does not exist
Nov 29 06:46:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:32 compute-0 sudo[255908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:32 compute-0 sudo[255908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:32 compute-0 sudo[255908]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:32 compute-0 sudo[255933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:46:32 compute-0 sudo[255933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:32 compute-0 sudo[255933]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:33 compute-0 sudo[255959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:33 compute-0 sudo[255959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:33 compute-0 sudo[255959]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:33 compute-0 podman[255983]: 2025-11-29 06:46:33.635141191 +0000 UTC m=+0.082384256 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 06:46:33 compute-0 sudo[255990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:33 compute-0 sudo[255990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:33 compute-0 sudo[255990]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:33 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:46:33 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:46:33 compute-0 ceph-mon[74654]: pgmap v1023: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:46:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:34.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:46:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:34.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:35 compute-0 sshd-session[256027]: Invalid user username from 162.214.92.14 port 49750
Nov 29 06:46:35 compute-0 sshd-session[256027]: Received disconnect from 162.214.92.14 port 49750:11: Bye Bye [preauth]
Nov 29 06:46:35 compute-0 sshd-session[256027]: Disconnected from invalid user username 162.214.92.14 port 49750 [preauth]
Nov 29 06:46:35 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:46:35.707 157767 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:05:03', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'aa:d2:09:dd:a5:e1'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 06:46:35 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:46:35.710 157767 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 06:46:35 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:46:35.712 157767 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=93db784b-4e42-404a-b548-49ad165fd917, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 06:46:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:36.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:36 compute-0 ceph-mon[74654]: pgmap v1024: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:36.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:46:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:37 compute-0 ceph-mon[74654]: pgmap v1025: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:38.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:38.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:40.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:40 compute-0 ceph-mon[74654]: pgmap v1026: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:40.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:41 compute-0 sshd-session[256032]: Received disconnect from 176.109.67.96 port 60390:11: Bye Bye [preauth]
Nov 29 06:46:41 compute-0 sshd-session[256032]: Disconnected from authenticating user root 176.109.67.96 port 60390 [preauth]
Nov 29 06:46:41 compute-0 ceph-mon[74654]: pgmap v1027: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:46:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:42.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:42.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:42 compute-0 sshd-session[256035]: Received disconnect from 34.92.81.41 port 47202:11: Bye Bye [preauth]
Nov 29 06:46:42 compute-0 sshd-session[256035]: Disconnected from authenticating user root 34.92.81.41 port 47202 [preauth]
Nov 29 06:46:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:43 compute-0 podman[256037]: 2025-11-29 06:46:43.130994164 +0000 UTC m=+0.092159991 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 06:46:43 compute-0 ceph-mon[74654]: pgmap v1028: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:44.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:44 compute-0 podman[256060]: 2025-11-29 06:46:44.199678975 +0000 UTC m=+0.147202344 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller)
Nov 29 06:46:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:44.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:46.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:46 compute-0 ceph-mon[74654]: pgmap v1029: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:46.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:46:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:47 compute-0 ceph-mon[74654]: pgmap v1030: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:48.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:48.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:50.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:50 compute-0 ceph-mon[74654]: pgmap v1031: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:50.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:51 compute-0 ceph-mon[74654]: pgmap v1032: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:46:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:52.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:52.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:53 compute-0 sudo[256091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:53 compute-0 sudo[256091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:53 compute-0 sudo[256091]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:53 compute-0 sudo[256116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:46:53 compute-0 sudo[256116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:46:53 compute-0 sudo[256116]: pam_unix(sudo:session): session closed for user root
Nov 29 06:46:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:54.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:54 compute-0 ceph-mon[74654]: pgmap v1033: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:46:54
Nov 29 06:46:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:46:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:46:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 29 06:46:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:46:54 compute-0 sshd-session[256059]: error: kex_exchange_identification: read: Connection timed out
Nov 29 06:46:54 compute-0 sshd-session[256059]: banner exchange: Connection from 58.210.98.130 port 1437: Connection timed out
Nov 29 06:46:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:46:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:46:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:46:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:46:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:46:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:46:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:54.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:55 compute-0 ceph-mon[74654]: pgmap v1034: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:56.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:46:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:56.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:46:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:46:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:46:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:46:58.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:46:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:46:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:46:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:46:58.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:46:58 compute-0 ceph-mon[74654]: pgmap v1035: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:46:59 compute-0 ceph-mon[74654]: pgmap v1036: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:00.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:00.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:47:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:02.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:02 compute-0 ceph-mon[74654]: pgmap v1037: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 06:47:02 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3066267713' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:47:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 06:47:02 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3066267713' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:47:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:02.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:03 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/3066267713' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:47:03 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/3066267713' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:47:03 compute-0 ceph-mon[74654]: pgmap v1038: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:04 compute-0 podman[256146]: 2025-11-29 06:47:04.110562684 +0000 UTC m=+0.072929218 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd)
Nov 29 06:47:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:04.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:04.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:06.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:06 compute-0 ceph-mon[74654]: pgmap v1039: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:06.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:47:07 compute-0 ceph-mon[74654]: pgmap v1040: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:08.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:08.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:09 compute-0 sshd-session[256169]: Invalid user packer from 197.13.24.157 port 59950
Nov 29 06:47:09 compute-0 sshd-session[256169]: Received disconnect from 197.13.24.157 port 59950:11: Bye Bye [preauth]
Nov 29 06:47:09 compute-0 sshd-session[256169]: Disconnected from invalid user packer 197.13.24.157 port 59950 [preauth]
Nov 29 06:47:09 compute-0 ceph-mon[74654]: pgmap v1041: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:10.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:10.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:11 compute-0 ceph-mon[74654]: pgmap v1042: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:11 compute-0 sshd-session[256172]: Invalid user rahul from 193.163.72.91 port 54514
Nov 29 06:47:11 compute-0 sshd-session[256172]: Received disconnect from 193.163.72.91 port 54514:11: Bye Bye [preauth]
Nov 29 06:47:11 compute-0 sshd-session[256172]: Disconnected from invalid user rahul 193.163.72.91 port 54514 [preauth]
Nov 29 06:47:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:47:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:12.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:12 compute-0 nova_compute[251877]: 2025-11-29 06:47:12.448 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:12 compute-0 nova_compute[251877]: 2025-11-29 06:47:12.449 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:12 compute-0 nova_compute[251877]: 2025-11-29 06:47:12.449 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 06:47:12 compute-0 nova_compute[251877]: 2025-11-29 06:47:12.449 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 06:47:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:12.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:47:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:47:13 compute-0 nova_compute[251877]: 2025-11-29 06:47:13.716 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 06:47:13 compute-0 nova_compute[251877]: 2025-11-29 06:47:13.717 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:13 compute-0 nova_compute[251877]: 2025-11-29 06:47:13.717 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:13 compute-0 nova_compute[251877]: 2025-11-29 06:47:13.718 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:13 compute-0 nova_compute[251877]: 2025-11-29 06:47:13.718 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:13 compute-0 nova_compute[251877]: 2025-11-29 06:47:13.718 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:13 compute-0 nova_compute[251877]: 2025-11-29 06:47:13.719 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:13 compute-0 nova_compute[251877]: 2025-11-29 06:47:13.719 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 06:47:13 compute-0 nova_compute[251877]: 2025-11-29 06:47:13.720 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:13 compute-0 sudo[256176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:13 compute-0 sudo[256176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:13 compute-0 sudo[256176]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:14 compute-0 ceph-mon[74654]: pgmap v1043: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:14 compute-0 sudo[256207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:14 compute-0 sudo[256207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:14 compute-0 nova_compute[251877]: 2025-11-29 06:47:14.065 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:47:14 compute-0 nova_compute[251877]: 2025-11-29 06:47:14.065 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:47:14 compute-0 nova_compute[251877]: 2025-11-29 06:47:14.065 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:47:14 compute-0 nova_compute[251877]: 2025-11-29 06:47:14.065 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 06:47:14 compute-0 nova_compute[251877]: 2025-11-29 06:47:14.066 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:47:14 compute-0 sudo[256207]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:14 compute-0 podman[256200]: 2025-11-29 06:47:14.089183767 +0000 UTC m=+0.117916268 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 06:47:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:14.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:47:14 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028868458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:14 compute-0 nova_compute[251877]: 2025-11-29 06:47:14.521 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:47:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:14.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:14 compute-0 nova_compute[251877]: 2025-11-29 06:47:14.767 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 06:47:14 compute-0 nova_compute[251877]: 2025-11-29 06:47:14.769 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5221MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 06:47:14 compute-0 nova_compute[251877]: 2025-11-29 06:47:14.769 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:47:14 compute-0 nova_compute[251877]: 2025-11-29 06:47:14.770 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:47:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:15 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1028868458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:15 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/2186349468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:15 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/3335706329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:15 compute-0 podman[256267]: 2025-11-29 06:47:15.145816857 +0000 UTC m=+0.108416050 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 06:47:15 compute-0 nova_compute[251877]: 2025-11-29 06:47:15.956 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 06:47:15 compute-0 nova_compute[251877]: 2025-11-29 06:47:15.957 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 06:47:15 compute-0 nova_compute[251877]: 2025-11-29 06:47:15.991 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:47:16 compute-0 ceph-mon[74654]: pgmap v1044: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:16.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:47:16 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1847130721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:16 compute-0 nova_compute[251877]: 2025-11-29 06:47:16.456 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:47:16 compute-0 nova_compute[251877]: 2025-11-29 06:47:16.466 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 06:47:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:16.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:16 compute-0 nova_compute[251877]: 2025-11-29 06:47:16.814 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 06:47:16 compute-0 nova_compute[251877]: 2025-11-29 06:47:16.817 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 06:47:16 compute-0 nova_compute[251877]: 2025-11-29 06:47:16.817 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:47:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:47:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:47:17.234 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:47:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:47:17.235 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:47:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:47:17.235 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:47:17 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1939421518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:17 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/209762820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:17 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1847130721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:18.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:18 compute-0 ceph-mon[74654]: pgmap v1045: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:18.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:19 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/687770592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:19 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1602380716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:19 compute-0 ceph-mon[74654]: pgmap v1046: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:20.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:20 compute-0 sshd-session[256317]: Received disconnect from 103.31.39.143 port 47802:11: Bye Bye [preauth]
Nov 29 06:47:20 compute-0 sshd-session[256317]: Disconnected from authenticating user root 103.31.39.143 port 47802 [preauth]
Nov 29 06:47:20 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/728103463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:20 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/2414150286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:20.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.322 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.323 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.549 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.550 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.550 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 06:47:21 compute-0 ceph-mon[74654]: pgmap v1047: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.767 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.769 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.769 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.769 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.769 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.769 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.769 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.770 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.770 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.876 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.877 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.878 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.878 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 06:47:21 compute-0 nova_compute[251877]: 2025-11-29 06:47:21.879 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:47:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:47:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:22.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:47:22 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547278201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:22 compute-0 nova_compute[251877]: 2025-11-29 06:47:22.373 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:47:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:22.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:22 compute-0 nova_compute[251877]: 2025-11-29 06:47:22.622 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 06:47:22 compute-0 nova_compute[251877]: 2025-11-29 06:47:22.624 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5208MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 06:47:22 compute-0 nova_compute[251877]: 2025-11-29 06:47:22.624 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:47:22 compute-0 nova_compute[251877]: 2025-11-29 06:47:22.625 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:47:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:22 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2547278201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:24 compute-0 ceph-mon[74654]: pgmap v1048: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:47:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:24.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:47:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:47:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:47:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:47:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:47:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:47:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:47:24 compute-0 nova_compute[251877]: 2025-11-29 06:47:24.319 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 06:47:24 compute-0 nova_compute[251877]: 2025-11-29 06:47:24.320 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 06:47:24 compute-0 nova_compute[251877]: 2025-11-29 06:47:24.366 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:47:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:24.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:47:24 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3841899284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:24 compute-0 nova_compute[251877]: 2025-11-29 06:47:24.845 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:47:24 compute-0 nova_compute[251877]: 2025-11-29 06:47:24.855 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 06:47:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:25 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3841899284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:47:25 compute-0 nova_compute[251877]: 2025-11-29 06:47:25.751 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 06:47:25 compute-0 nova_compute[251877]: 2025-11-29 06:47:25.753 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 06:47:25 compute-0 nova_compute[251877]: 2025-11-29 06:47:25.753 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:47:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:26.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:26 compute-0 ceph-mon[74654]: pgmap v1049: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:47:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:26.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:47:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:47:27 compute-0 ceph-mon[74654]: pgmap v1050: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:28.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:28.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:47:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:47:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:47:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:47:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:47:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:47:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:47:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:47:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:47:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:47:30 compute-0 ceph-mon[74654]: pgmap v1051: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:30.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:30.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:32 compute-0 ceph-mon[74654]: pgmap v1052: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:47:32 compute-0 sshd-session[256370]: Received disconnect from 103.143.238.173 port 53602:11: Bye Bye [preauth]
Nov 29 06:47:32 compute-0 sshd-session[256370]: Disconnected from authenticating user root 103.143.238.173 port 53602 [preauth]
Nov 29 06:47:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:32.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:32.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:33 compute-0 sudo[256373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:33 compute-0 sudo[256373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:33 compute-0 sudo[256373]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:33 compute-0 sudo[256398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:47:33 compute-0 sudo[256398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:33 compute-0 sudo[256398]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:33 compute-0 sudo[256423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:33 compute-0 sudo[256423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:33 compute-0 sudo[256423]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:33 compute-0 sudo[256448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:47:33 compute-0 sudo[256448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:34 compute-0 ceph-mon[74654]: pgmap v1053: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:34 compute-0 sudo[256448]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:34.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:34 compute-0 sudo[256504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:34 compute-0 sudo[256506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:34 compute-0 sudo[256504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:34 compute-0 sudo[256506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:34 compute-0 sudo[256504]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:34 compute-0 sudo[256506]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:34 compute-0 sudo[256555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:34 compute-0 sudo[256555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:34 compute-0 sudo[256555]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:34 compute-0 sudo[256563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:47:34 compute-0 sudo[256563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:34 compute-0 sudo[256563]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:34 compute-0 podman[256551]: 2025-11-29 06:47:34.375593766 +0000 UTC m=+0.090058547 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:47:34 compute-0 sudo[256625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:34 compute-0 sudo[256625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:34 compute-0 sudo[256625]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:34 compute-0 sudo[256650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 29 06:47:34 compute-0 sudo[256650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:34.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:34 compute-0 sudo[256650]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:47:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:47:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:47:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:47:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:47:34 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:47:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:47:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:47:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:47:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:47:34 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 471f119a-b633-4d9b-94d7-ed38201d0d2c does not exist
Nov 29 06:47:34 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 9a54f098-65fa-433a-a3fa-b4fcd566d4a9 does not exist
Nov 29 06:47:34 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 1a180563-9d74-4b5d-aa7c-4e31174f9e85 does not exist
Nov 29 06:47:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:47:34 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:47:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:47:34 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:47:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:47:34 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:47:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:34 compute-0 sudo[256693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:34 compute-0 sudo[256693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:34 compute-0 sudo[256693]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:35 compute-0 sudo[256718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:47:35 compute-0 sudo[256718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:35 compute-0 sudo[256718]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:35 compute-0 sudo[256744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:35 compute-0 sudo[256744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:35 compute-0 sudo[256744]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:35 compute-0 sudo[256769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:47:35 compute-0 sudo[256769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:35 compute-0 podman[256837]: 2025-11-29 06:47:35.728011662 +0000 UTC m=+0.080265284 container create 9310e15a3b7d2d7c25de1c6228ba7e8e1e559a93eb369cc07311e071db1d7916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_varahamihira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:47:35 compute-0 podman[256837]: 2025-11-29 06:47:35.675999446 +0000 UTC m=+0.028253118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:47:35 compute-0 systemd[1]: Started libpod-conmon-9310e15a3b7d2d7c25de1c6228ba7e8e1e559a93eb369cc07311e071db1d7916.scope.
Nov 29 06:47:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:47:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:47:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:47:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:47:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:47:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:47:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:47:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:47:35 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:47:35 compute-0 ceph-mon[74654]: pgmap v1054: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:35 compute-0 podman[256837]: 2025-11-29 06:47:35.847399445 +0000 UTC m=+0.199653117 container init 9310e15a3b7d2d7c25de1c6228ba7e8e1e559a93eb369cc07311e071db1d7916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 06:47:35 compute-0 podman[256837]: 2025-11-29 06:47:35.856683663 +0000 UTC m=+0.208937245 container start 9310e15a3b7d2d7c25de1c6228ba7e8e1e559a93eb369cc07311e071db1d7916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 06:47:35 compute-0 podman[256837]: 2025-11-29 06:47:35.860049017 +0000 UTC m=+0.212302599 container attach 9310e15a3b7d2d7c25de1c6228ba7e8e1e559a93eb369cc07311e071db1d7916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_varahamihira, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:47:35 compute-0 systemd[1]: libpod-9310e15a3b7d2d7c25de1c6228ba7e8e1e559a93eb369cc07311e071db1d7916.scope: Deactivated successfully.
Nov 29 06:47:35 compute-0 infallible_varahamihira[256854]: 167 167
Nov 29 06:47:35 compute-0 conmon[256854]: conmon 9310e15a3b7d2d7c25de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9310e15a3b7d2d7c25de1c6228ba7e8e1e559a93eb369cc07311e071db1d7916.scope/container/memory.events
Nov 29 06:47:35 compute-0 podman[256837]: 2025-11-29 06:47:35.868559463 +0000 UTC m=+0.220813045 container died 9310e15a3b7d2d7c25de1c6228ba7e8e1e559a93eb369cc07311e071db1d7916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:47:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-67530e5dfb42d684f3b9e30de3465973fc13fa3048804ff72840ae99267adbd3-merged.mount: Deactivated successfully.
Nov 29 06:47:35 compute-0 podman[256837]: 2025-11-29 06:47:35.922441903 +0000 UTC m=+0.274695485 container remove 9310e15a3b7d2d7c25de1c6228ba7e8e1e559a93eb369cc07311e071db1d7916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:47:35 compute-0 systemd[1]: libpod-conmon-9310e15a3b7d2d7c25de1c6228ba7e8e1e559a93eb369cc07311e071db1d7916.scope: Deactivated successfully.
Nov 29 06:47:36 compute-0 podman[256879]: 2025-11-29 06:47:36.155940551 +0000 UTC m=+0.073154067 container create 41623afd2ea0e92cd21bef143e27b9dd6a8d4c91e73fec8fe447b251b114148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 06:47:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:36.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:36 compute-0 systemd[1]: Started libpod-conmon-41623afd2ea0e92cd21bef143e27b9dd6a8d4c91e73fec8fe447b251b114148c.scope.
Nov 29 06:47:36 compute-0 podman[256879]: 2025-11-29 06:47:36.131159331 +0000 UTC m=+0.048372937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:47:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ad2485ff93bf183fea30ac9779b67a9228fced5a54fde6e9caa2f00d41de9d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ad2485ff93bf183fea30ac9779b67a9228fced5a54fde6e9caa2f00d41de9d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ad2485ff93bf183fea30ac9779b67a9228fced5a54fde6e9caa2f00d41de9d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ad2485ff93bf183fea30ac9779b67a9228fced5a54fde6e9caa2f00d41de9d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:47:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ad2485ff93bf183fea30ac9779b67a9228fced5a54fde6e9caa2f00d41de9d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:47:36 compute-0 podman[256879]: 2025-11-29 06:47:36.251186102 +0000 UTC m=+0.168399638 container init 41623afd2ea0e92cd21bef143e27b9dd6a8d4c91e73fec8fe447b251b114148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:47:36 compute-0 podman[256879]: 2025-11-29 06:47:36.259069811 +0000 UTC m=+0.176283357 container start 41623afd2ea0e92cd21bef143e27b9dd6a8d4c91e73fec8fe447b251b114148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 06:47:36 compute-0 podman[256879]: 2025-11-29 06:47:36.262902768 +0000 UTC m=+0.180116314 container attach 41623afd2ea0e92cd21bef143e27b9dd6a8d4c91e73fec8fe447b251b114148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:47:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:36.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:47:37 compute-0 reverent_northcutt[256895]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:47:37 compute-0 reverent_northcutt[256895]: --> relative data size: 1.0
Nov 29 06:47:37 compute-0 reverent_northcutt[256895]: --> All data devices are unavailable
Nov 29 06:47:37 compute-0 systemd[1]: libpod-41623afd2ea0e92cd21bef143e27b9dd6a8d4c91e73fec8fe447b251b114148c.scope: Deactivated successfully.
Nov 29 06:47:37 compute-0 podman[256879]: 2025-11-29 06:47:37.091740114 +0000 UTC m=+1.008953710 container died 41623afd2ea0e92cd21bef143e27b9dd6a8d4c91e73fec8fe447b251b114148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:47:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ad2485ff93bf183fea30ac9779b67a9228fced5a54fde6e9caa2f00d41de9d3-merged.mount: Deactivated successfully.
Nov 29 06:47:37 compute-0 podman[256879]: 2025-11-29 06:47:37.160434116 +0000 UTC m=+1.077647622 container remove 41623afd2ea0e92cd21bef143e27b9dd6a8d4c91e73fec8fe447b251b114148c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_northcutt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:47:37 compute-0 systemd[1]: libpod-conmon-41623afd2ea0e92cd21bef143e27b9dd6a8d4c91e73fec8fe447b251b114148c.scope: Deactivated successfully.
Nov 29 06:47:37 compute-0 sudo[256769]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:37 compute-0 sudo[256923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:37 compute-0 sudo[256923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:37 compute-0 sudo[256923]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:37 compute-0 sudo[256948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:47:37 compute-0 sudo[256948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:37 compute-0 sudo[256948]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:37 compute-0 sudo[256973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:37 compute-0 sudo[256973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:37 compute-0 sudo[256973]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:37 compute-0 sudo[256998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:47:37 compute-0 sudo[256998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:37 compute-0 podman[257064]: 2025-11-29 06:47:37.882681075 +0000 UTC m=+0.036515277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:47:38 compute-0 podman[257064]: 2025-11-29 06:47:38.102199374 +0000 UTC m=+0.256033536 container create 8a2a0427995a1153c34c0a5d1ed060d56c2c028a9d33c79f47e03679ef5ef833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 06:47:38 compute-0 ceph-mon[74654]: pgmap v1055: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:38 compute-0 systemd[1]: Started libpod-conmon-8a2a0427995a1153c34c0a5d1ed060d56c2c028a9d33c79f47e03679ef5ef833.scope.
Nov 29 06:47:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:38.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:47:38 compute-0 podman[257064]: 2025-11-29 06:47:38.206034774 +0000 UTC m=+0.359868966 container init 8a2a0427995a1153c34c0a5d1ed060d56c2c028a9d33c79f47e03679ef5ef833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:47:38 compute-0 podman[257064]: 2025-11-29 06:47:38.215008503 +0000 UTC m=+0.368842635 container start 8a2a0427995a1153c34c0a5d1ed060d56c2c028a9d33c79f47e03679ef5ef833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 06:47:38 compute-0 podman[257064]: 2025-11-29 06:47:38.218779898 +0000 UTC m=+0.372614090 container attach 8a2a0427995a1153c34c0a5d1ed060d56c2c028a9d33c79f47e03679ef5ef833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:47:38 compute-0 intelligent_ishizaka[257081]: 167 167
Nov 29 06:47:38 compute-0 systemd[1]: libpod-8a2a0427995a1153c34c0a5d1ed060d56c2c028a9d33c79f47e03679ef5ef833.scope: Deactivated successfully.
Nov 29 06:47:38 compute-0 podman[257064]: 2025-11-29 06:47:38.221972007 +0000 UTC m=+0.375806229 container died 8a2a0427995a1153c34c0a5d1ed060d56c2c028a9d33c79f47e03679ef5ef833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 06:47:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c7568c48b7d6eea3f57d9b88c637fb1c541f790323157244cc8b927b609526e-merged.mount: Deactivated successfully.
Nov 29 06:47:38 compute-0 podman[257064]: 2025-11-29 06:47:38.299415132 +0000 UTC m=+0.453249264 container remove 8a2a0427995a1153c34c0a5d1ed060d56c2c028a9d33c79f47e03679ef5ef833 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 06:47:38 compute-0 systemd[1]: libpod-conmon-8a2a0427995a1153c34c0a5d1ed060d56c2c028a9d33c79f47e03679ef5ef833.scope: Deactivated successfully.
Nov 29 06:47:38 compute-0 podman[257104]: 2025-11-29 06:47:38.515043303 +0000 UTC m=+0.063997842 container create 2ada4587b163ca2da9e49afc680aa11fa5b717d570bb52beb0e1cfb376054e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_montalcini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:47:38 compute-0 systemd[1]: Started libpod-conmon-2ada4587b163ca2da9e49afc680aa11fa5b717d570bb52beb0e1cfb376054e37.scope.
Nov 29 06:47:38 compute-0 podman[257104]: 2025-11-29 06:47:38.492376413 +0000 UTC m=+0.041330942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:47:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e689d64a8b3b278bd8d52ab8853fd81b576ccc304876733532187709f7364b49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e689d64a8b3b278bd8d52ab8853fd81b576ccc304876733532187709f7364b49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e689d64a8b3b278bd8d52ab8853fd81b576ccc304876733532187709f7364b49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:47:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e689d64a8b3b278bd8d52ab8853fd81b576ccc304876733532187709f7364b49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:47:38 compute-0 podman[257104]: 2025-11-29 06:47:38.620457367 +0000 UTC m=+0.169411896 container init 2ada4587b163ca2da9e49afc680aa11fa5b717d570bb52beb0e1cfb376054e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_montalcini, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:47:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:38.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:38 compute-0 podman[257104]: 2025-11-29 06:47:38.634470907 +0000 UTC m=+0.183425446 container start 2ada4587b163ca2da9e49afc680aa11fa5b717d570bb52beb0e1cfb376054e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 06:47:38 compute-0 podman[257104]: 2025-11-29 06:47:38.639105826 +0000 UTC m=+0.188060425 container attach 2ada4587b163ca2da9e49afc680aa11fa5b717d570bb52beb0e1cfb376054e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 06:47:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]: {
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:     "1": [
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:         {
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:             "devices": [
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:                 "/dev/loop3"
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:             ],
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:             "lv_name": "ceph_lv0",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:             "lv_size": "7511998464",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:             "name": "ceph_lv0",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:             "tags": {
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:                 "ceph.cluster_name": "ceph",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:                 "ceph.crush_device_class": "",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:                 "ceph.encrypted": "0",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:                 "ceph.osd_id": "1",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:                 "ceph.type": "block",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:                 "ceph.vdo": "0"
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:             },
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:             "type": "block",
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:             "vg_name": "ceph_vg0"
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:         }
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]:     ]
Nov 29 06:47:39 compute-0 eloquent_montalcini[257121]: }
Nov 29 06:47:39 compute-0 systemd[1]: libpod-2ada4587b163ca2da9e49afc680aa11fa5b717d570bb52beb0e1cfb376054e37.scope: Deactivated successfully.
Nov 29 06:47:39 compute-0 podman[257104]: 2025-11-29 06:47:39.412079517 +0000 UTC m=+0.961034036 container died 2ada4587b163ca2da9e49afc680aa11fa5b717d570bb52beb0e1cfb376054e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:47:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e689d64a8b3b278bd8d52ab8853fd81b576ccc304876733532187709f7364b49-merged.mount: Deactivated successfully.
Nov 29 06:47:39 compute-0 podman[257104]: 2025-11-29 06:47:39.481698804 +0000 UTC m=+1.030653393 container remove 2ada4587b163ca2da9e49afc680aa11fa5b717d570bb52beb0e1cfb376054e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_montalcini, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:47:39 compute-0 systemd[1]: libpod-conmon-2ada4587b163ca2da9e49afc680aa11fa5b717d570bb52beb0e1cfb376054e37.scope: Deactivated successfully.
Nov 29 06:47:39 compute-0 sudo[256998]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:39 compute-0 sudo[257144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:39 compute-0 sudo[257144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:39 compute-0 sudo[257144]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:39 compute-0 sudo[257169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:47:39 compute-0 sudo[257169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:39 compute-0 sudo[257169]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:39 compute-0 sudo[257194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:39 compute-0 sudo[257194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:39 compute-0 sudo[257194]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:39 compute-0 sudo[257219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:47:39 compute-0 sudo[257219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:40.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:40 compute-0 ceph-mon[74654]: pgmap v1056: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:40 compute-0 podman[257285]: 2025-11-29 06:47:40.350131072 +0000 UTC m=+0.056674398 container create 39c34fffe0cb444d2d4923cc7d46f283c89f80b37cc6cb4313a1a81c67277481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 06:47:40 compute-0 systemd[1]: Started libpod-conmon-39c34fffe0cb444d2d4923cc7d46f283c89f80b37cc6cb4313a1a81c67277481.scope.
Nov 29 06:47:40 compute-0 podman[257285]: 2025-11-29 06:47:40.329783336 +0000 UTC m=+0.036326662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:47:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:47:40 compute-0 podman[257285]: 2025-11-29 06:47:40.461759549 +0000 UTC m=+0.168302895 container init 39c34fffe0cb444d2d4923cc7d46f283c89f80b37cc6cb4313a1a81c67277481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_elgamal, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:47:40 compute-0 podman[257285]: 2025-11-29 06:47:40.472670812 +0000 UTC m=+0.179214108 container start 39c34fffe0cb444d2d4923cc7d46f283c89f80b37cc6cb4313a1a81c67277481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_elgamal, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 06:47:40 compute-0 podman[257285]: 2025-11-29 06:47:40.476397256 +0000 UTC m=+0.182940642 container attach 39c34fffe0cb444d2d4923cc7d46f283c89f80b37cc6cb4313a1a81c67277481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_elgamal, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 06:47:40 compute-0 vigorous_elgamal[257301]: 167 167
Nov 29 06:47:40 compute-0 systemd[1]: libpod-39c34fffe0cb444d2d4923cc7d46f283c89f80b37cc6cb4313a1a81c67277481.scope: Deactivated successfully.
Nov 29 06:47:40 compute-0 podman[257285]: 2025-11-29 06:47:40.481103047 +0000 UTC m=+0.187646373 container died 39c34fffe0cb444d2d4923cc7d46f283c89f80b37cc6cb4313a1a81c67277481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:47:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:40.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d893f7e6f73deaf3baf51564eb4f8c9a2eb4b91434d2c3fa76eaf4986a048961-merged.mount: Deactivated successfully.
Nov 29 06:47:40 compute-0 podman[257285]: 2025-11-29 06:47:40.72627314 +0000 UTC m=+0.432816456 container remove 39c34fffe0cb444d2d4923cc7d46f283c89f80b37cc6cb4313a1a81c67277481 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_elgamal, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:47:40 compute-0 systemd[1]: libpod-conmon-39c34fffe0cb444d2d4923cc7d46f283c89f80b37cc6cb4313a1a81c67277481.scope: Deactivated successfully.
Nov 29 06:47:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:40 compute-0 podman[257325]: 2025-11-29 06:47:40.994288259 +0000 UTC m=+0.072147969 container create 4ca449010ae47b96a671d2815a8eff02f15ba745bc36ce0008601c67b6d74f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 06:47:41 compute-0 systemd[1]: Started libpod-conmon-4ca449010ae47b96a671d2815a8eff02f15ba745bc36ce0008601c67b6d74f39.scope.
Nov 29 06:47:41 compute-0 podman[257325]: 2025-11-29 06:47:40.965275041 +0000 UTC m=+0.043134791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:47:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad06777c97760bfdfa940a41cc7ce92843e043d9b2f545bffda7df0cc915d7f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad06777c97760bfdfa940a41cc7ce92843e043d9b2f545bffda7df0cc915d7f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad06777c97760bfdfa940a41cc7ce92843e043d9b2f545bffda7df0cc915d7f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad06777c97760bfdfa940a41cc7ce92843e043d9b2f545bffda7df0cc915d7f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:47:41 compute-0 podman[257325]: 2025-11-29 06:47:41.114031951 +0000 UTC m=+0.191891671 container init 4ca449010ae47b96a671d2815a8eff02f15ba745bc36ce0008601c67b6d74f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:47:41 compute-0 podman[257325]: 2025-11-29 06:47:41.129251524 +0000 UTC m=+0.207111194 container start 4ca449010ae47b96a671d2815a8eff02f15ba745bc36ce0008601c67b6d74f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:47:41 compute-0 podman[257325]: 2025-11-29 06:47:41.133239375 +0000 UTC m=+0.211099085 container attach 4ca449010ae47b96a671d2815a8eff02f15ba745bc36ce0008601c67b6d74f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:47:41 compute-0 ceph-mon[74654]: pgmap v1057: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:42 compute-0 lucid_haibt[257342]: {
Nov 29 06:47:42 compute-0 lucid_haibt[257342]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:47:42 compute-0 lucid_haibt[257342]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:47:42 compute-0 lucid_haibt[257342]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:47:42 compute-0 lucid_haibt[257342]:         "osd_id": 1,
Nov 29 06:47:42 compute-0 lucid_haibt[257342]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:47:42 compute-0 lucid_haibt[257342]:         "type": "bluestore"
Nov 29 06:47:42 compute-0 lucid_haibt[257342]:     }
Nov 29 06:47:42 compute-0 lucid_haibt[257342]: }
Nov 29 06:47:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:47:42 compute-0 systemd[1]: libpod-4ca449010ae47b96a671d2815a8eff02f15ba745bc36ce0008601c67b6d74f39.scope: Deactivated successfully.
Nov 29 06:47:42 compute-0 podman[257325]: 2025-11-29 06:47:42.080499777 +0000 UTC m=+1.158359477 container died 4ca449010ae47b96a671d2815a8eff02f15ba745bc36ce0008601c67b6d74f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 06:47:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad06777c97760bfdfa940a41cc7ce92843e043d9b2f545bffda7df0cc915d7f3-merged.mount: Deactivated successfully.
Nov 29 06:47:42 compute-0 podman[257325]: 2025-11-29 06:47:42.161360628 +0000 UTC m=+1.239220318 container remove 4ca449010ae47b96a671d2815a8eff02f15ba745bc36ce0008601c67b6d74f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_haibt, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:47:42 compute-0 systemd[1]: libpod-conmon-4ca449010ae47b96a671d2815a8eff02f15ba745bc36ce0008601c67b6d74f39.scope: Deactivated successfully.
Nov 29 06:47:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:42.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:42 compute-0 sudo[257219]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:47:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:47:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:47:42 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:47:42 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 7b0f484b-2a72-4c25-925c-dd274e0d91e6 does not exist
Nov 29 06:47:42 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev cb2b8c06-3606-4b5a-ba6e-158aae37996f does not exist
Nov 29 06:47:42 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 26af0bdc-9835-42c7-9ee1-fda5b9697031 does not exist
Nov 29 06:47:42 compute-0 sudo[257378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:42 compute-0 sudo[257378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:42 compute-0 sudo[257378]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:42 compute-0 sudo[257403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:47:42 compute-0 sudo[257403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:42 compute-0 sudo[257403]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:42.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:43 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:47:43 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:47:43 compute-0 ceph-mon[74654]: pgmap v1058: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:43 compute-0 sshd-session[257358]: Received disconnect from 118.193.39.127 port 44634:11: Bye Bye [preauth]
Nov 29 06:47:43 compute-0 sshd-session[257358]: Disconnected from authenticating user root 118.193.39.127 port 44634 [preauth]
Nov 29 06:47:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:44.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:44.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:45 compute-0 podman[257431]: 2025-11-29 06:47:45.142697455 +0000 UTC m=+0.093144033 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:47:45 compute-0 sshd-session[257429]: Invalid user azureuser from 162.214.92.14 port 48916
Nov 29 06:47:45 compute-0 sshd-session[257429]: Received disconnect from 162.214.92.14 port 48916:11: Bye Bye [preauth]
Nov 29 06:47:45 compute-0 sshd-session[257429]: Disconnected from invalid user azureuser 162.214.92.14 port 48916 [preauth]
Nov 29 06:47:45 compute-0 podman[257453]: 2025-11-29 06:47:45.480753953 +0000 UTC m=+0.132719795 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:47:46 compute-0 ceph-mon[74654]: pgmap v1059: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:46.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:46.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:47:47 compute-0 ceph-mon[74654]: pgmap v1060: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:48.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:48.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:49 compute-0 rsyslogd[1007]: imjournal: 2575 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 29 06:47:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:50.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:50 compute-0 ceph-mon[74654]: pgmap v1061: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:50.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:51 compute-0 ceph-mon[74654]: pgmap v1062: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:47:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:52.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:52.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:54 compute-0 ceph-mon[74654]: pgmap v1063: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:54.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:47:54
Nov 29 06:47:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:47:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:47:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta', '.rgw.root']
Nov 29 06:47:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:47:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:47:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:47:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:47:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:47:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:47:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:47:54 compute-0 sudo[257485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:54 compute-0 sudo[257485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:54 compute-0 sudo[257485]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:54 compute-0 sudo[257510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:47:54 compute-0 sudo[257510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:47:54 compute-0 sudo[257510]: pam_unix(sudo:session): session closed for user root
Nov 29 06:47:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:54.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:55 compute-0 ceph-mon[74654]: pgmap v1064: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:47:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:56.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:47:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:56.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.074075) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398877074203, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1681, "num_deletes": 251, "total_data_size": 3039443, "memory_usage": 3089224, "flush_reason": "Manual Compaction"}
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398877151799, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 1737351, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18090, "largest_seqno": 19769, "table_properties": {"data_size": 1731823, "index_size": 2668, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14351, "raw_average_key_size": 20, "raw_value_size": 1719513, "raw_average_value_size": 2428, "num_data_blocks": 123, "num_entries": 708, "num_filter_entries": 708, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764398703, "oldest_key_time": 1764398703, "file_creation_time": 1764398877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 77870 microseconds, and 9912 cpu microseconds.
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.151953) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 1737351 bytes OK
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.151986) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.154962) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.154995) EVENT_LOG_v1 {"time_micros": 1764398877154986, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.155021) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3032417, prev total WAL file size 3032417, number of live WAL files 2.
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.156715) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1696KB)], [41(9238KB)]
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398877156799, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 11197876, "oldest_snapshot_seqno": -1}
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4809 keys, 8585958 bytes, temperature: kUnknown
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398877271951, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 8585958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8553778, "index_size": 19078, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12037, "raw_key_size": 121166, "raw_average_key_size": 25, "raw_value_size": 8466674, "raw_average_value_size": 1760, "num_data_blocks": 783, "num_entries": 4809, "num_filter_entries": 4809, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764398877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.272302) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 8585958 bytes
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.299866) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.1 rd, 74.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.0 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(11.4) write-amplify(4.9) OK, records in: 5243, records dropped: 434 output_compression: NoCompression
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.299942) EVENT_LOG_v1 {"time_micros": 1764398877299927, "job": 20, "event": "compaction_finished", "compaction_time_micros": 115286, "compaction_time_cpu_micros": 43319, "output_level": 6, "num_output_files": 1, "total_output_size": 8585958, "num_input_records": 5243, "num_output_records": 4809, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398877301066, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398877304870, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.156569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.305078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.305084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.305087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.305090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:47:57 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:47:57.305092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:47:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:47:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:47:58.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:47:58 compute-0 ceph-mon[74654]: pgmap v1065: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:47:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:47:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:47:58.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:47:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:47:59 compute-0 ceph-mon[74654]: pgmap v1066: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:00 compute-0 sshd-session[257537]: Invalid user desliga from 103.63.25.115 port 43466
Nov 29 06:48:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:00.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:00 compute-0 sshd-session[257537]: Received disconnect from 103.63.25.115 port 43466:11: Bye Bye [preauth]
Nov 29 06:48:00 compute-0 sshd-session[257537]: Disconnected from invalid user desliga 103.63.25.115 port 43466 [preauth]
Nov 29 06:48:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:00.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:02 compute-0 sshd-session[257540]: Invalid user train1 from 49.247.35.31 port 14856
Nov 29 06:48:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:48:02 compute-0 ceph-mon[74654]: pgmap v1067: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:02.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:02 compute-0 sshd-session[257540]: Received disconnect from 49.247.35.31 port 14856:11: Bye Bye [preauth]
Nov 29 06:48:02 compute-0 sshd-session[257540]: Disconnected from invalid user train1 49.247.35.31 port 14856 [preauth]
Nov 29 06:48:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:02.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:03 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/4080925924' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:48:03 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/4080925924' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:48:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:04.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:04.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:05 compute-0 podman[257544]: 2025-11-29 06:48:05.136819385 +0000 UTC m=+0.095521699 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 06:48:05 compute-0 ceph-mon[74654]: pgmap v1068: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:06 compute-0 nova_compute[251877]: 2025-11-29 06:48:06.077 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 15.85 sec
Nov 29 06:48:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:06.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:06 compute-0 ceph-mon[74654]: pgmap v1069: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:06 compute-0 sshd-session[257568]: Invalid user hello from 176.109.67.96 port 60198
Nov 29 06:48:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:06.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:06 compute-0 sshd-session[257568]: Received disconnect from 176.109.67.96 port 60198:11: Bye Bye [preauth]
Nov 29 06:48:06 compute-0 sshd-session[257568]: Disconnected from invalid user hello 176.109.67.96 port 60198 [preauth]
Nov 29 06:48:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:07 compute-0 sshd-session[257566]: Invalid user glenn from 34.92.81.41 port 49578
Nov 29 06:48:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:48:07 compute-0 sshd-session[257566]: Received disconnect from 34.92.81.41 port 49578:11: Bye Bye [preauth]
Nov 29 06:48:07 compute-0 sshd-session[257566]: Disconnected from invalid user glenn 34.92.81.41 port 49578 [preauth]
Nov 29 06:48:07 compute-0 ceph-mon[74654]: pgmap v1070: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:08 compute-0 sshd-session[257570]: Invalid user packer from 27.112.78.245 port 38944
Nov 29 06:48:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:08.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:08 compute-0 sshd-session[257570]: Received disconnect from 27.112.78.245 port 38944:11: Bye Bye [preauth]
Nov 29 06:48:08 compute-0 sshd-session[257570]: Disconnected from invalid user packer 27.112.78.245 port 38944 [preauth]
Nov 29 06:48:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:08.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:10.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:10 compute-0 ceph-mon[74654]: pgmap v1071: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:10.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:11 compute-0 ceph-mon[74654]: pgmap v1072: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:48:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:48:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:12.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:48:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:12.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:12 compute-0 nova_compute[251877]: 2025-11-29 06:48:12.958 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:48:12 compute-0 nova_compute[251877]: 2025-11-29 06:48:12.959 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:48:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:48:14 compute-0 ceph-mon[74654]: pgmap v1073: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:14.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:14 compute-0 sudo[257576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:14 compute-0 sudo[257576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:14 compute-0 sudo[257576]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:14.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:14 compute-0 sudo[257601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:14 compute-0 sudo[257601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:14 compute-0 sudo[257601]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:16 compute-0 podman[257627]: 2025-11-29 06:48:16.121728388 +0000 UTC m=+0.080435430 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 06:48:16 compute-0 podman[257628]: 2025-11-29 06:48:16.19007956 +0000 UTC m=+0.148211416 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 06:48:16 compute-0 ceph-mon[74654]: pgmap v1074: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:16.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:16.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:48:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:48:17.237 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:48:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:48:17.239 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:48:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:48:17.239 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:48:17 compute-0 ceph-mon[74654]: pgmap v1075: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:18.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:18.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:19 compute-0 ceph-mon[74654]: pgmap v1076: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:20.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:20 compute-0 nova_compute[251877]: 2025-11-29 06:48:20.353 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 4.28 sec
Nov 29 06:48:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:20.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:22 compute-0 ceph-mon[74654]: pgmap v1077: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:48:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:22.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:22.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:23 compute-0 sshd-session[257673]: Invalid user debian from 45.78.221.93 port 51742
Nov 29 06:48:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:48:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:24.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:48:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:48:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:48:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:48:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:48:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:48:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:48:24 compute-0 ceph-mon[74654]: pgmap v1078: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:24.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:25 compute-0 ceph-mon[74654]: pgmap v1079: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:26.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:48:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:26.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:48:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:48:27 compute-0 sshd-session[257678]: Invalid user stack from 197.13.24.157 port 34578
Nov 29 06:48:27 compute-0 sshd-session[257678]: Received disconnect from 197.13.24.157 port 34578:11: Bye Bye [preauth]
Nov 29 06:48:27 compute-0 sshd-session[257678]: Disconnected from invalid user stack 197.13.24.157 port 34578 [preauth]
Nov 29 06:48:27 compute-0 sshd-session[257673]: Received disconnect from 45.78.221.93 port 51742:11: Bye Bye [preauth]
Nov 29 06:48:27 compute-0 sshd-session[257673]: Disconnected from invalid user debian 45.78.221.93 port 51742 [preauth]
Nov 29 06:48:28 compute-0 ceph-mon[74654]: pgmap v1080: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:48:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:28.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:48:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:28.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:28 compute-0 nova_compute[251877]: 2025-11-29 06:48:28.745 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 06:48:28 compute-0 nova_compute[251877]: 2025-11-29 06:48:28.748 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:48:28 compute-0 nova_compute[251877]: 2025-11-29 06:48:28.748 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 06:48:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:29 compute-0 ceph-mon[74654]: pgmap v1081: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:48:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:48:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:48:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:48:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:48:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:48:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:48:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:48:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:48:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:48:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:30.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:30.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:31 compute-0 ceph-mon[74654]: pgmap v1082: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:48:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:32.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:32.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:34 compute-0 ceph-mon[74654]: pgmap v1083: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:34.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:34 compute-0 sshd-session[257684]: Received disconnect from 193.163.72.91 port 48684:11: Bye Bye [preauth]
Nov 29 06:48:34 compute-0 sshd-session[257684]: Disconnected from authenticating user root 193.163.72.91 port 48684 [preauth]
Nov 29 06:48:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:34.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:34 compute-0 sudo[257686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:34 compute-0 sudo[257686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:34 compute-0 sudo[257686]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:34 compute-0 sudo[257711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:34 compute-0 sudo[257711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:34 compute-0 sudo[257711]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:35 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:48:35 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.8 total, 600.0 interval
                                           Cumulative writes: 9194 writes, 35K keys, 9194 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9194 writes, 2074 syncs, 4.43 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 682 writes, 1062 keys, 682 commit groups, 1.0 writes per commit group, ingest: 0.34 MB, 0.00 MB/s
                                           Interval WAL: 682 writes, 328 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 06:48:36 compute-0 podman[257737]: 2025-11-29 06:48:36.128515031 +0000 UTC m=+0.090289733 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:48:36 compute-0 ceph-mon[74654]: pgmap v1084: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:36.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:36.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:48:38 compute-0 ceph-mon[74654]: pgmap v1085: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:38.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:38.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:40.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:40 compute-0 ceph-mon[74654]: pgmap v1086: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:40.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:41 compute-0 ceph-mon[74654]: pgmap v1087: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:48:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:48:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:42.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:48:42 compute-0 sshd-session[257761]: Invalid user jose from 103.143.238.173 port 36738
Nov 29 06:48:42 compute-0 sshd-session[257761]: Received disconnect from 103.143.238.173 port 36738:11: Bye Bye [preauth]
Nov 29 06:48:42 compute-0 sshd-session[257761]: Disconnected from invalid user jose 103.143.238.173 port 36738 [preauth]
Nov 29 06:48:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:42.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:42 compute-0 sudo[257763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:42 compute-0 sudo[257763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:42 compute-0 sudo[257763]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:42 compute-0 sudo[257788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:48:42 compute-0 sudo[257788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:42 compute-0 sudo[257788]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:42 compute-0 sudo[257813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:42 compute-0 sudo[257813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:42 compute-0 sudo[257813]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:43 compute-0 sudo[257838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 06:48:43 compute-0 sudo[257838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:43 compute-0 ceph-mon[74654]: pgmap v1088: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:43 compute-0 podman[257936]: 2025-11-29 06:48:43.674053947 +0000 UTC m=+0.093501783 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:48:43 compute-0 podman[257936]: 2025-11-29 06:48:43.794639472 +0000 UTC m=+0.214087338 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 06:48:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:48:44 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:48:44 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:44.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:44 compute-0 podman[258089]: 2025-11-29 06:48:44.567097319 +0000 UTC m=+0.079368929 container exec f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:48:44 compute-0 podman[258089]: 2025-11-29 06:48:44.586298834 +0000 UTC m=+0.098570394 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:48:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:44.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:48:44 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:48:44 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:44 compute-0 podman[258153]: 2025-11-29 06:48:44.90310995 +0000 UTC m=+0.074457543 container exec c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, distribution-scope=public, io.openshift.tags=Ceph keepalived, architecture=x86_64, release=1793, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.openshift.expose-services=, build-date=2023-02-22T09:23:20)
Nov 29 06:48:44 compute-0 podman[258153]: 2025-11-29 06:48:44.916096442 +0000 UTC m=+0.087444045 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, architecture=x86_64, io.openshift.tags=Ceph keepalived)
Nov 29 06:48:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:44 compute-0 sudo[257838]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:48:45 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:48:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:45 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:45 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.074531) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398925074604, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 654, "num_deletes": 251, "total_data_size": 886958, "memory_usage": 898536, "flush_reason": "Manual Compaction"}
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398925089483, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 878455, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19770, "largest_seqno": 20423, "table_properties": {"data_size": 874937, "index_size": 1426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7933, "raw_average_key_size": 19, "raw_value_size": 867897, "raw_average_value_size": 2121, "num_data_blocks": 62, "num_entries": 409, "num_filter_entries": 409, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764398878, "oldest_key_time": 1764398878, "file_creation_time": 1764398925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 15144 microseconds, and 7122 cpu microseconds.
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.089674) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 878455 bytes OK
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.089768) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.091508) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.091566) EVENT_LOG_v1 {"time_micros": 1764398925091556, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.091593) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 883557, prev total WAL file size 883557, number of live WAL files 2.
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.092951) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(857KB)], [44(8384KB)]
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398925093031, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 9464413, "oldest_snapshot_seqno": -1}
Nov 29 06:48:45 compute-0 sudo[258188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:45 compute-0 sudo[258188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:45 compute-0 sudo[258188]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4700 keys, 7370361 bytes, temperature: kUnknown
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398925159843, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7370361, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7339958, "index_size": 17557, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 119472, "raw_average_key_size": 25, "raw_value_size": 7255731, "raw_average_value_size": 1543, "num_data_blocks": 714, "num_entries": 4700, "num_filter_entries": 4700, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764398925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.160179) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7370361 bytes
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.161958) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.4 rd, 110.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.2 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(19.2) write-amplify(8.4) OK, records in: 5218, records dropped: 518 output_compression: NoCompression
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.161984) EVENT_LOG_v1 {"time_micros": 1764398925161972, "job": 22, "event": "compaction_finished", "compaction_time_micros": 66937, "compaction_time_cpu_micros": 31632, "output_level": 6, "num_output_files": 1, "total_output_size": 7370361, "num_input_records": 5218, "num_output_records": 4700, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398925163064, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764398925166350, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.092836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.166397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.166404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.166407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.166410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:48:45 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:48:45.166413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:48:45 compute-0 sudo[258213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:48:45 compute-0 sudo[258213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:45 compute-0 sudo[258213]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:45 compute-0 sudo[258238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:45 compute-0 sudo[258238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:45 compute-0 sudo[258238]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:45 compute-0 sudo[258263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:48:45 compute-0 sudo[258263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:45 compute-0 sudo[258263]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:48:45 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:48:45 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:46 compute-0 ceph-mon[74654]: pgmap v1089: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:46 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:46.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:46.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:47 compute-0 podman[258321]: 2025-11-29 06:48:47.152742507 +0000 UTC m=+0.108422869 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 06:48:47 compute-0 podman[258322]: 2025-11-29 06:48:47.184285545 +0000 UTC m=+0.131461320 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 06:48:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:48:47 compute-0 ceph-mon[74654]: pgmap v1090: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:48.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:48.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:48:49 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:48:49 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:48:49 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:48:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:48:49 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:48:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:48:50 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:50 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev e83f8407-e815-446f-bb63-4f55fa7fa9a2 does not exist
Nov 29 06:48:50 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev f6d336ef-eff9-4688-9010-6a487937a273 does not exist
Nov 29 06:48:50 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 38e04522-70ed-4a3a-9b0b-df93e09605fa does not exist
Nov 29 06:48:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:48:50 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:48:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:48:50 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:48:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:48:50 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:48:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:50.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:50 compute-0 sudo[258363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:50 compute-0 sudo[258363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:50 compute-0 sudo[258363]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:50 compute-0 sudo[258388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:48:50 compute-0 sudo[258388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:50 compute-0 sudo[258388]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:50 compute-0 ceph-mon[74654]: pgmap v1091: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:50 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:50 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:50 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:48:50 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:48:50 compute-0 sudo[258413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:50 compute-0 sudo[258413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:50 compute-0 sudo[258413]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:50 compute-0 sudo[258438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:48:50 compute-0 sudo[258438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:50.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:50 compute-0 nova_compute[251877]: 2025-11-29 06:48:50.976 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 20.62 sec
Nov 29 06:48:51 compute-0 podman[258505]: 2025-11-29 06:48:51.042442875 +0000 UTC m=+0.127681945 container create dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:48:51 compute-0 podman[258505]: 2025-11-29 06:48:50.959774594 +0000 UTC m=+0.045013724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:48:51 compute-0 nova_compute[251877]: 2025-11-29 06:48:51.092 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:48:51 compute-0 systemd[1]: Started libpod-conmon-dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845.scope.
Nov 29 06:48:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:48:51 compute-0 podman[258505]: 2025-11-29 06:48:51.341114876 +0000 UTC m=+0.426353966 container init dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_brattain, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:48:51 compute-0 podman[258505]: 2025-11-29 06:48:51.355052244 +0000 UTC m=+0.440291304 container start dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:48:51 compute-0 podman[258505]: 2025-11-29 06:48:51.363404207 +0000 UTC m=+0.448643277 container attach dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 06:48:51 compute-0 sweet_brattain[258523]: 167 167
Nov 29 06:48:51 compute-0 systemd[1]: libpod-dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845.scope: Deactivated successfully.
Nov 29 06:48:51 compute-0 podman[258505]: 2025-11-29 06:48:51.365657949 +0000 UTC m=+0.450897019 container died dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:48:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:48:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:48:51 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:48:51 compute-0 ceph-mon[74654]: pgmap v1092: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d88ddd02126c24bce52bf816074b4a7a73eeb245481dac60f40f87087ef9d0d6-merged.mount: Deactivated successfully.
Nov 29 06:48:51 compute-0 podman[258505]: 2025-11-29 06:48:51.808309778 +0000 UTC m=+0.893548848 container remove dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:48:51 compute-0 sshd-session[258539]: Invalid user ghost from 162.214.92.14 port 48068
Nov 29 06:48:51 compute-0 systemd[1]: libpod-conmon-dcb57b1d498da49524739a1abd0b9fef120067a655e7de412a917aa08c5ff845.scope: Deactivated successfully.
Nov 29 06:48:51 compute-0 sshd-session[258539]: Received disconnect from 162.214.92.14 port 48068:11: Bye Bye [preauth]
Nov 29 06:48:51 compute-0 sshd-session[258539]: Disconnected from invalid user ghost 162.214.92.14 port 48068 [preauth]
Nov 29 06:48:52 compute-0 podman[258549]: 2025-11-29 06:48:52.051486355 +0000 UTC m=+0.052003388 container create a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:48:52 compute-0 systemd[1]: Started libpod-conmon-a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656.scope.
Nov 29 06:48:52 compute-0 podman[258549]: 2025-11-29 06:48:52.024860124 +0000 UTC m=+0.025377257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:48:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23519ff070b5ee8841b096f7221962f4b1fe8ea6169012fb92494e6b4e2eb732/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23519ff070b5ee8841b096f7221962f4b1fe8ea6169012fb92494e6b4e2eb732/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23519ff070b5ee8841b096f7221962f4b1fe8ea6169012fb92494e6b4e2eb732/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23519ff070b5ee8841b096f7221962f4b1fe8ea6169012fb92494e6b4e2eb732/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23519ff070b5ee8841b096f7221962f4b1fe8ea6169012fb92494e6b4e2eb732/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:48:52 compute-0 podman[258549]: 2025-11-29 06:48:52.168481631 +0000 UTC m=+0.168998694 container init a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_archimedes, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 06:48:52 compute-0 podman[258549]: 2025-11-29 06:48:52.181183905 +0000 UTC m=+0.181700938 container start a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:48:52 compute-0 podman[258549]: 2025-11-29 06:48:52.192929522 +0000 UTC m=+0.193446575 container attach a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:48:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:48:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:52.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:48:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:52.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:48:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:53 compute-0 musing_archimedes[258566]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:48:53 compute-0 musing_archimedes[258566]: --> relative data size: 1.0
Nov 29 06:48:53 compute-0 musing_archimedes[258566]: --> All data devices are unavailable
Nov 29 06:48:53 compute-0 ceph-mgr[74948]: [devicehealth INFO root] Check health
Nov 29 06:48:53 compute-0 systemd[1]: libpod-a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656.scope: Deactivated successfully.
Nov 29 06:48:53 compute-0 podman[258549]: 2025-11-29 06:48:53.066628026 +0000 UTC m=+1.067145119 container died a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:48:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-23519ff070b5ee8841b096f7221962f4b1fe8ea6169012fb92494e6b4e2eb732-merged.mount: Deactivated successfully.
Nov 29 06:48:53 compute-0 podman[258549]: 2025-11-29 06:48:53.17816519 +0000 UTC m=+1.178682263 container remove a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:48:53 compute-0 systemd[1]: libpod-conmon-a53d96fac660ae1844a06b68bf96cadb5422d9f8fec7d4281a51f66eab66c656.scope: Deactivated successfully.
Nov 29 06:48:53 compute-0 sudo[258438]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:53 compute-0 sudo[258599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:53 compute-0 sudo[258599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:53 compute-0 sudo[258599]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:53 compute-0 sudo[258624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:48:53 compute-0 sudo[258624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:53 compute-0 sudo[258624]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:53 compute-0 sudo[258649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:53 compute-0 sudo[258649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:53 compute-0 sudo[258649]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:53 compute-0 sudo[258674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:48:53 compute-0 sudo[258674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:54 compute-0 podman[258739]: 2025-11-29 06:48:54.025953044 +0000 UTC m=+0.051588147 container create 16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:48:54 compute-0 ceph-mon[74654]: pgmap v1093: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:54 compute-0 systemd[1]: Started libpod-conmon-16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5.scope.
Nov 29 06:48:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:48:54 compute-0 podman[258739]: 2025-11-29 06:48:54.002060719 +0000 UTC m=+0.027695852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:48:54 compute-0 podman[258739]: 2025-11-29 06:48:54.111496814 +0000 UTC m=+0.137131957 container init 16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:48:54 compute-0 podman[258739]: 2025-11-29 06:48:54.119296822 +0000 UTC m=+0.144931925 container start 16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 06:48:54 compute-0 fervent_allen[258755]: 167 167
Nov 29 06:48:54 compute-0 systemd[1]: libpod-16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5.scope: Deactivated successfully.
Nov 29 06:48:54 compute-0 conmon[258755]: conmon 16532a9922f9d6504c35 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5.scope/container/memory.events
Nov 29 06:48:54 compute-0 podman[258739]: 2025-11-29 06:48:54.129402543 +0000 UTC m=+0.155037706 container attach 16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:48:54 compute-0 podman[258739]: 2025-11-29 06:48:54.130297638 +0000 UTC m=+0.155932751 container died 16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 06:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d932d24ecd7c498135bee4c8885b58a8c4ceba49c219e2a1ed32429d86f27fa-merged.mount: Deactivated successfully.
Nov 29 06:48:54 compute-0 podman[258739]: 2025-11-29 06:48:54.229645443 +0000 UTC m=+0.255280586 container remove 16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 06:48:54 compute-0 systemd[1]: libpod-conmon-16532a9922f9d6504c359bad0476d4283934fecfefa9711e95a5c737a4d28eb5.scope: Deactivated successfully.
Nov 29 06:48:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:48:54
Nov 29 06:48:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:48:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:48:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'vms', 'images', 'volumes', '.mgr', 'cephfs.cephfs.meta']
Nov 29 06:48:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:48:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:54.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:48:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:48:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:48:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:48:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:48:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:48:54 compute-0 sshd-session[258597]: Invalid user develop from 118.193.39.127 port 38932
Nov 29 06:48:54 compute-0 podman[258781]: 2025-11-29 06:48:54.436470047 +0000 UTC m=+0.074815922 container create 31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:48:54 compute-0 podman[258781]: 2025-11-29 06:48:54.40778311 +0000 UTC m=+0.046129065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:48:54 compute-0 systemd[1]: Started libpod-conmon-31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074.scope.
Nov 29 06:48:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b875c59043919c16b7d4722be392eed8e18c419359c3c4ba20cfa7c27151a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b875c59043919c16b7d4722be392eed8e18c419359c3c4ba20cfa7c27151a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b875c59043919c16b7d4722be392eed8e18c419359c3c4ba20cfa7c27151a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b875c59043919c16b7d4722be392eed8e18c419359c3c4ba20cfa7c27151a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:48:54 compute-0 podman[258781]: 2025-11-29 06:48:54.582502371 +0000 UTC m=+0.220848256 container init 31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_benz, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 06:48:54 compute-0 podman[258781]: 2025-11-29 06:48:54.590122303 +0000 UTC m=+0.228468168 container start 31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:48:54 compute-0 sshd-session[258597]: Received disconnect from 118.193.39.127 port 38932:11: Bye Bye [preauth]
Nov 29 06:48:54 compute-0 sshd-session[258597]: Disconnected from invalid user develop 118.193.39.127 port 38932 [preauth]
Nov 29 06:48:54 compute-0 podman[258781]: 2025-11-29 06:48:54.595175044 +0000 UTC m=+0.233520899 container attach 31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 06:48:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:54.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:55 compute-0 sudo[258802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:55 compute-0 sudo[258802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:55 compute-0 sudo[258802]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:55 compute-0 sudo[258828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:55 compute-0 sudo[258828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:55 compute-0 sudo[258828]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:55 compute-0 exciting_benz[258797]: {
Nov 29 06:48:55 compute-0 exciting_benz[258797]:     "1": [
Nov 29 06:48:55 compute-0 exciting_benz[258797]:         {
Nov 29 06:48:55 compute-0 exciting_benz[258797]:             "devices": [
Nov 29 06:48:55 compute-0 exciting_benz[258797]:                 "/dev/loop3"
Nov 29 06:48:55 compute-0 exciting_benz[258797]:             ],
Nov 29 06:48:55 compute-0 exciting_benz[258797]:             "lv_name": "ceph_lv0",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:             "lv_size": "7511998464",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:             "name": "ceph_lv0",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:             "tags": {
Nov 29 06:48:55 compute-0 exciting_benz[258797]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:                 "ceph.cluster_name": "ceph",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:                 "ceph.crush_device_class": "",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:                 "ceph.encrypted": "0",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:                 "ceph.osd_id": "1",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:                 "ceph.type": "block",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:                 "ceph.vdo": "0"
Nov 29 06:48:55 compute-0 exciting_benz[258797]:             },
Nov 29 06:48:55 compute-0 exciting_benz[258797]:             "type": "block",
Nov 29 06:48:55 compute-0 exciting_benz[258797]:             "vg_name": "ceph_vg0"
Nov 29 06:48:55 compute-0 exciting_benz[258797]:         }
Nov 29 06:48:55 compute-0 exciting_benz[258797]:     ]
Nov 29 06:48:55 compute-0 exciting_benz[258797]: }
Nov 29 06:48:55 compute-0 systemd[1]: libpod-31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074.scope: Deactivated successfully.
Nov 29 06:48:55 compute-0 podman[258857]: 2025-11-29 06:48:55.376350834 +0000 UTC m=+0.031781596 container died 31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_benz, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 06:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4b875c59043919c16b7d4722be392eed8e18c419359c3c4ba20cfa7c27151a1-merged.mount: Deactivated successfully.
Nov 29 06:48:56 compute-0 podman[258857]: 2025-11-29 06:48:56.187774465 +0000 UTC m=+0.843205267 container remove 31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_benz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 06:48:56 compute-0 systemd[1]: libpod-conmon-31328f7a8b414102c9c3babfc7be0bce8e3464ef2a0d5f68eb75fcfa9a4b6074.scope: Deactivated successfully.
Nov 29 06:48:56 compute-0 sudo[258674]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:56 compute-0 ceph-mon[74654]: pgmap v1094: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:56.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:56 compute-0 sudo[258872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:56 compute-0 sudo[258872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:56 compute-0 sudo[258872]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:56 compute-0 sudo[258897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:48:56 compute-0 sudo[258897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:56 compute-0 sudo[258897]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:56 compute-0 sudo[258922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:56 compute-0 sudo[258922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:56 compute-0 sudo[258922]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:56 compute-0 sudo[258947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:48:56 compute-0 sudo[258947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:56.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:56 compute-0 podman[259012]: 2025-11-29 06:48:56.979470188 +0000 UTC m=+0.060537306 container create d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:48:57 compute-0 systemd[1]: Started libpod-conmon-d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6.scope.
Nov 29 06:48:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:48:57 compute-0 podman[259012]: 2025-11-29 06:48:56.962009412 +0000 UTC m=+0.043076560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:48:57 compute-0 podman[259012]: 2025-11-29 06:48:57.063339872 +0000 UTC m=+0.144407080 container init d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 06:48:57 compute-0 podman[259012]: 2025-11-29 06:48:57.072198568 +0000 UTC m=+0.153265676 container start d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 06:48:57 compute-0 podman[259012]: 2025-11-29 06:48:57.075739457 +0000 UTC m=+0.156806615 container attach d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:48:57 compute-0 frosty_pare[259028]: 167 167
Nov 29 06:48:57 compute-0 systemd[1]: libpod-d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6.scope: Deactivated successfully.
Nov 29 06:48:57 compute-0 podman[259012]: 2025-11-29 06:48:57.078773831 +0000 UTC m=+0.159840949 container died d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:48:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cde43ffdbef4d88360d42698e3934f180209c2bbca46d1474ad175f165c9ffa-merged.mount: Deactivated successfully.
Nov 29 06:48:57 compute-0 podman[259012]: 2025-11-29 06:48:57.128285539 +0000 UTC m=+0.209352687 container remove d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_pare, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 29 06:48:57 compute-0 systemd[1]: libpod-conmon-d4503fe63abccb54b637bc743b1a92e184a2195bebd6d0dcf6ecad18428b79b6.scope: Deactivated successfully.
Nov 29 06:48:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:48:57 compute-0 podman[259053]: 2025-11-29 06:48:57.361399597 +0000 UTC m=+0.065690200 container create bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chaplygin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 06:48:57 compute-0 systemd[1]: Started libpod-conmon-bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1.scope.
Nov 29 06:48:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac338160a1fc434e6cf6ff3a778913d5d4485bbe5b6e64f18820cc36137bf3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac338160a1fc434e6cf6ff3a778913d5d4485bbe5b6e64f18820cc36137bf3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac338160a1fc434e6cf6ff3a778913d5d4485bbe5b6e64f18820cc36137bf3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:48:57 compute-0 podman[259053]: 2025-11-29 06:48:57.336172825 +0000 UTC m=+0.040463428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:48:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac338160a1fc434e6cf6ff3a778913d5d4485bbe5b6e64f18820cc36137bf3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:48:57 compute-0 ceph-mon[74654]: pgmap v1095: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:57 compute-0 podman[259053]: 2025-11-29 06:48:57.456294677 +0000 UTC m=+0.160585280 container init bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 06:48:57 compute-0 podman[259053]: 2025-11-29 06:48:57.468286901 +0000 UTC m=+0.172577474 container start bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chaplygin, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 29 06:48:57 compute-0 podman[259053]: 2025-11-29 06:48:57.474149594 +0000 UTC m=+0.178440267 container attach bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 06:48:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:48:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:48:58.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:48:58 compute-0 jolly_chaplygin[259069]: {
Nov 29 06:48:58 compute-0 jolly_chaplygin[259069]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:48:58 compute-0 jolly_chaplygin[259069]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:48:58 compute-0 jolly_chaplygin[259069]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:48:58 compute-0 jolly_chaplygin[259069]:         "osd_id": 1,
Nov 29 06:48:58 compute-0 jolly_chaplygin[259069]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:48:58 compute-0 jolly_chaplygin[259069]:         "type": "bluestore"
Nov 29 06:48:58 compute-0 jolly_chaplygin[259069]:     }
Nov 29 06:48:58 compute-0 jolly_chaplygin[259069]: }
Nov 29 06:48:58 compute-0 systemd[1]: libpod-bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1.scope: Deactivated successfully.
Nov 29 06:48:58 compute-0 podman[259053]: 2025-11-29 06:48:58.370018265 +0000 UTC m=+1.074308898 container died bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 06:48:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-aac338160a1fc434e6cf6ff3a778913d5d4485bbe5b6e64f18820cc36137bf3f-merged.mount: Deactivated successfully.
Nov 29 06:48:58 compute-0 podman[259053]: 2025-11-29 06:48:58.434424547 +0000 UTC m=+1.138715110 container remove bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_chaplygin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 06:48:58 compute-0 systemd[1]: libpod-conmon-bd4cb8bb471b8dd896b1297799632e3aa2754647221d20aae58bda7f0b81eeb1.scope: Deactivated successfully.
Nov 29 06:48:58 compute-0 sudo[258947]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:48:58 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:48:58 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:58 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev f962f9a4-6523-4bf5-b821-10feb7e4d907 does not exist
Nov 29 06:48:58 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 442fb065-95d6-436c-859a-cc8110335ca4 does not exist
Nov 29 06:48:58 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev a3f1ca67-ecb1-42f4-8c7f-f0adb95cfd82 does not exist
Nov 29 06:48:58 compute-0 sudo[259106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:48:58 compute-0 sudo[259106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:58 compute-0 sudo[259106]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:58 compute-0 sudo[259131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:48:58 compute-0 sudo[259131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:48:58 compute-0 sudo[259131]: pam_unix(sudo:session): session closed for user root
Nov 29 06:48:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:48:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:48:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:48:58.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:48:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:48:59 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:59 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:48:59 compute-0 ceph-mon[74654]: pgmap v1096: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:00.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:00.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:01 compute-0 ceph-mon[74654]: pgmap v1097: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:49:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:02.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:02.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:02 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/4223701543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:49:02 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/4223701543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:49:02 compute-0 nova_compute[251877]: 2025-11-29 06:49:02.827 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:49:02 compute-0 nova_compute[251877]: 2025-11-29 06:49:02.828 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:49:02 compute-0 nova_compute[251877]: 2025-11-29 06:49:02.829 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 06:49:02 compute-0 nova_compute[251877]: 2025-11-29 06:49:02.829 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 06:49:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:03 compute-0 ceph-mon[74654]: pgmap v1098: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:04.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:04.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:06 compute-0 ceph-mon[74654]: pgmap v1099: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:06.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:06.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:07 compute-0 podman[259162]: 2025-11-29 06:49:07.106388409 +0000 UTC m=+0.071036408 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Nov 29 06:49:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:49:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:08.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:08.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:09 compute-0 ceph-mon[74654]: pgmap v1100: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:10.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:10 compute-0 ceph-mon[74654]: pgmap v1101: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:10.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:11 compute-0 ceph-mon[74654]: pgmap v1102: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:49:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:12.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:12 compute-0 sshd-session[259160]: Invalid user frontend from 101.47.163.116 port 54056
Nov 29 06:49:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:12.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:12 compute-0 sshd-session[259160]: Received disconnect from 101.47.163.116 port 54056:11: Bye Bye [preauth]
Nov 29 06:49:12 compute-0 sshd-session[259160]: Disconnected from invalid user frontend 101.47.163.116 port 54056 [preauth]
Nov 29 06:49:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:49:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:49:14 compute-0 ceph-mon[74654]: pgmap v1103: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:14.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:14.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:15 compute-0 sudo[259187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:49:15 compute-0 sudo[259187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:49:15 compute-0 sudo[259187]: pam_unix(sudo:session): session closed for user root
Nov 29 06:49:15 compute-0 sudo[259212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:49:15 compute-0 sudo[259212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:49:15 compute-0 sudo[259212]: pam_unix(sudo:session): session closed for user root
Nov 29 06:49:15 compute-0 ceph-mon[74654]: pgmap v1104: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:16.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:16.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:49:17.239 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:49:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:49:17.240 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:49:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:49:17.240 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:49:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:49:18 compute-0 ceph-mon[74654]: pgmap v1105: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:18 compute-0 sshd-session[259238]: Received disconnect from 193.46.255.244 port 17662:11:  [preauth]
Nov 29 06:49:18 compute-0 sshd-session[259238]: Disconnected from authenticating user root 193.46.255.244 port 17662 [preauth]
Nov 29 06:49:18 compute-0 podman[259240]: 2025-11-29 06:49:18.157596916 +0000 UTC m=+0.111135594 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 06:49:18 compute-0 podman[259241]: 2025-11-29 06:49:18.168224582 +0000 UTC m=+0.122294675 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 06:49:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:18.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:18.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:20 compute-0 ceph-mon[74654]: pgmap v1106: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:49:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:20.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:49:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:20.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:22 compute-0 ceph-mon[74654]: pgmap v1107: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:22.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:49:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:22.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:49:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:49:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:49:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:49:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:49:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:49:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:24.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:24 compute-0 ceph-mon[74654]: pgmap v1108: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:24.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:26.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000056s ======
Nov 29 06:49:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:26.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Nov 29 06:49:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:49:27 compute-0 ceph-mon[74654]: pgmap v1109: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:28.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:28.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:28 compute-0 ceph-mon[74654]: pgmap v1110: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:29 compute-0 sshd-session[259295]: Invalid user app from 176.109.67.96 port 42778
Nov 29 06:49:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:49:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:49:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:49:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:49:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:49:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:49:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:49:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:49:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:49:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:49:29 compute-0 sshd-session[259295]: Received disconnect from 176.109.67.96 port 42778:11: Bye Bye [preauth]
Nov 29 06:49:29 compute-0 sshd-session[259295]: Disconnected from invalid user app 176.109.67.96 port 42778 [preauth]
Nov 29 06:49:29 compute-0 ceph-mon[74654]: pgmap v1111: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:29 compute-0 sshd-session[259293]: Invalid user castle from 103.31.39.143 port 48936
Nov 29 06:49:30 compute-0 sshd-session[259293]: Received disconnect from 103.31.39.143 port 48936:11: Bye Bye [preauth]
Nov 29 06:49:30 compute-0 sshd-session[259293]: Disconnected from invalid user castle 103.31.39.143 port 48936 [preauth]
Nov 29 06:49:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:30.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:49:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:30.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:49:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:32 compute-0 ceph-mon[74654]: pgmap v1112: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:32.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:49:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:32.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:33 compute-0 sshd-session[259299]: Invalid user jose from 34.92.81.41 port 51936
Nov 29 06:49:33 compute-0 sshd-session[259299]: Received disconnect from 34.92.81.41 port 51936:11: Bye Bye [preauth]
Nov 29 06:49:33 compute-0 sshd-session[259299]: Disconnected from invalid user jose 34.92.81.41 port 51936 [preauth]
Nov 29 06:49:34 compute-0 nova_compute[251877]: 2025-11-29 06:49:34.163 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 06:49:34 compute-0 nova_compute[251877]: 2025-11-29 06:49:34.165 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:49:34 compute-0 nova_compute[251877]: 2025-11-29 06:49:34.165 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:49:34 compute-0 nova_compute[251877]: 2025-11-29 06:49:34.165 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:49:34 compute-0 nova_compute[251877]: 2025-11-29 06:49:34.165 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:49:34 compute-0 nova_compute[251877]: 2025-11-29 06:49:34.165 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:49:34 compute-0 nova_compute[251877]: 2025-11-29 06:49:34.166 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:49:34 compute-0 nova_compute[251877]: 2025-11-29 06:49:34.166 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 06:49:34 compute-0 nova_compute[251877]: 2025-11-29 06:49:34.166 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:49:34 compute-0 ceph-mon[74654]: pgmap v1113: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:34.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:34.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:35 compute-0 nova_compute[251877]: 2025-11-29 06:49:35.003 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 24.02 sec
Nov 29 06:49:35 compute-0 ceph-mon[74654]: pgmap v1114: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:35 compute-0 sudo[259303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:49:35 compute-0 sudo[259303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:49:35 compute-0 sudo[259303]: pam_unix(sudo:session): session closed for user root
Nov 29 06:49:35 compute-0 sudo[259328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:49:35 compute-0 sudo[259328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:49:35 compute-0 sudo[259328]: pam_unix(sudo:session): session closed for user root
Nov 29 06:49:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:36.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:36.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:36 compute-0 sshd-session[259353]: Invalid user user5 from 49.247.35.31 port 1030
Nov 29 06:49:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:37 compute-0 sshd-session[259353]: Received disconnect from 49.247.35.31 port 1030:11: Bye Bye [preauth]
Nov 29 06:49:37 compute-0 sshd-session[259353]: Disconnected from invalid user user5 49.247.35.31 port 1030 [preauth]
Nov 29 06:49:37 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:49:38 compute-0 podman[259356]: 2025-11-29 06:49:38.15121844 +0000 UTC m=+0.100701023 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 06:49:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:38.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:38 compute-0 ceph-mon[74654]: pgmap v1115: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:38.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:40 compute-0 ceph-mon[74654]: pgmap v1116: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:40.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:40.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:41 compute-0 sshd-session[259377]: Invalid user scanner from 197.13.24.157 port 35236
Nov 29 06:49:41 compute-0 sshd-session[259377]: Received disconnect from 197.13.24.157 port 35236:11: Bye Bye [preauth]
Nov 29 06:49:41 compute-0 sshd-session[259377]: Disconnected from invalid user scanner 197.13.24.157 port 35236 [preauth]
Nov 29 06:49:42 compute-0 ceph-mon[74654]: pgmap v1117: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:42.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:42 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:49:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:42.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:43 compute-0 ceph-mon[74654]: pgmap v1118: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:44.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:44.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:46 compute-0 ceph-mon[74654]: pgmap v1119: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:46.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:46.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:47 compute-0 ceph-mon[74654]: pgmap v1120: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:49:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:48.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:48.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:49 compute-0 podman[259383]: 2025-11-29 06:49:49.121536966 +0000 UTC m=+0.086897099 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:49:49 compute-0 podman[259384]: 2025-11-29 06:49:49.176079914 +0000 UTC m=+0.133738493 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:49:50 compute-0 ceph-mon[74654]: pgmap v1121: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:50.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:49:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:50.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:49:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:51 compute-0 nova_compute[251877]: 2025-11-29 06:49:51.121 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:49:51 compute-0 nova_compute[251877]: 2025-11-29 06:49:51.122 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:49:51 compute-0 nova_compute[251877]: 2025-11-29 06:49:51.122 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:49:51 compute-0 nova_compute[251877]: 2025-11-29 06:49:51.122 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 06:49:51 compute-0 nova_compute[251877]: 2025-11-29 06:49:51.123 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:49:51 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:49:51 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1336302150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:49:51 compute-0 nova_compute[251877]: 2025-11-29 06:49:51.574 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:49:51 compute-0 nova_compute[251877]: 2025-11-29 06:49:51.743 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 06:49:51 compute-0 nova_compute[251877]: 2025-11-29 06:49:51.744 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5203MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 06:49:51 compute-0 nova_compute[251877]: 2025-11-29 06:49:51.745 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:49:51 compute-0 nova_compute[251877]: 2025-11-29 06:49:51.745 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:49:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:52.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:52.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:49:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:53 compute-0 ceph-mon[74654]: pgmap v1122: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:53 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2261730173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:49:53 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 06:49:53 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1398036893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:49:53 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1336302150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:49:54 compute-0 sshd-session[259455]: Received disconnect from 103.143.238.173 port 39200:11: Bye Bye [preauth]
Nov 29 06:49:54 compute-0 sshd-session[259455]: Disconnected from authenticating user root 103.143.238.173 port 39200 [preauth]
Nov 29 06:49:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:49:54
Nov 29 06:49:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:49:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:49:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'images', '.rgw.root', 'backups']
Nov 29 06:49:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:49:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:49:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:49:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:49:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:49:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:49:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:49:54 compute-0 ceph-mon[74654]: pgmap v1123: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:54.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:54.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:55 compute-0 sudo[259460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:49:55 compute-0 sudo[259460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:49:55 compute-0 sshd-session[259457]: Received disconnect from 103.63.25.115 port 46642:11: Bye Bye [preauth]
Nov 29 06:49:55 compute-0 sudo[259460]: pam_unix(sudo:session): session closed for user root
Nov 29 06:49:55 compute-0 sshd-session[259457]: Disconnected from authenticating user root 103.63.25.115 port 46642 [preauth]
Nov 29 06:49:55 compute-0 sudo[259485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:49:55 compute-0 sudo[259485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:49:55 compute-0 sudo[259485]: pam_unix(sudo:session): session closed for user root
Nov 29 06:49:56 compute-0 ceph-mon[74654]: pgmap v1124: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:49:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:56.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:49:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:56.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:57 compute-0 sshd-session[259510]: Invalid user ubuntu from 193.163.72.91 port 37020
Nov 29 06:49:57 compute-0 sshd-session[259510]: Received disconnect from 193.163.72.91 port 37020:11: Bye Bye [preauth]
Nov 29 06:49:57 compute-0 sshd-session[259510]: Disconnected from invalid user ubuntu 193.163.72.91 port 37020 [preauth]
Nov 29 06:49:57 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:49:58 compute-0 ceph-mon[74654]: pgmap v1125: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:49:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:49:58.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:49:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:49:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:49:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:49:58.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:49:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:59 compute-0 sshd-session[259513]: Received disconnect from 162.214.92.14 port 47240:11: Bye Bye [preauth]
Nov 29 06:49:59 compute-0 sshd-session[259513]: Disconnected from authenticating user root 162.214.92.14 port 47240 [preauth]
Nov 29 06:49:59 compute-0 sudo[259515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:49:59 compute-0 sudo[259515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:49:59 compute-0 sudo[259515]: pam_unix(sudo:session): session closed for user root
Nov 29 06:49:59 compute-0 sudo[259541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:49:59 compute-0 sudo[259541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:49:59 compute-0 sudo[259541]: pam_unix(sudo:session): session closed for user root
Nov 29 06:49:59 compute-0 sudo[259566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:49:59 compute-0 sudo[259566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:49:59 compute-0 sudo[259566]: pam_unix(sudo:session): session closed for user root
Nov 29 06:49:59 compute-0 sudo[259591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 06:49:59 compute-0 sudo[259591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:49:59 compute-0 ceph-mon[74654]: pgmap v1126: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:49:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:49:59 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:49:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:49:59 compute-0 sudo[259591]: pam_unix(sudo:session): session closed for user root
Nov 29 06:49:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:49:59 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:49:59 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:49:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:49:59 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:49:59 compute-0 sudo[259634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:49:59 compute-0 sudo[259634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:49:59 compute-0 sudo[259634]: pam_unix(sudo:session): session closed for user root
Nov 29 06:49:59 compute-0 sudo[259659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:49:59 compute-0 sudo[259659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:49:59 compute-0 sudo[259659]: pam_unix(sudo:session): session closed for user root
Nov 29 06:49:59 compute-0 sudo[259684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:49:59 compute-0 sudo[259684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:49:59 compute-0 sudo[259684]: pam_unix(sudo:session): session closed for user root
Nov 29 06:49:59 compute-0 sudo[259709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:49:59 compute-0 sudo[259709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:00 compute-0 ceph-mon[74654]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 06:50:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:00.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:00 compute-0 sudo[259709]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:00 compute-0 sudo[259767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:00 compute-0 sudo[259767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:00 compute-0 sudo[259767]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:00 compute-0 sudo[259792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:50:00 compute-0 sudo[259792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:00 compute-0 sudo[259792]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:00 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:50:00 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:50:00 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:50:00 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:50:00 compute-0 ceph-mon[74654]: overall HEALTH_OK
Nov 29 06:50:00 compute-0 sudo[259817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:00 compute-0 sudo[259817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:00 compute-0 sudo[259817]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:00.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:00 compute-0 sudo[259842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- inventory --format=json-pretty --filter-for-batch
Nov 29 06:50:00 compute-0 sudo[259842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:01 compute-0 podman[259908]: 2025-11-29 06:50:01.254105835 +0000 UTC m=+0.018515706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:50:01 compute-0 podman[259908]: 2025-11-29 06:50:01.591692911 +0000 UTC m=+0.356102792 container create 3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 06:50:02 compute-0 systemd[1]: Started libpod-conmon-3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c.scope.
Nov 29 06:50:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:50:02 compute-0 nova_compute[251877]: 2025-11-29 06:50:02.209 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 17.21 sec
Nov 29 06:50:02 compute-0 ceph-mon[74654]: pgmap v1127: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:02 compute-0 podman[259908]: 2025-11-29 06:50:02.386313994 +0000 UTC m=+1.150723945 container init 3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:50:02 compute-0 podman[259908]: 2025-11-29 06:50:02.402022591 +0000 UTC m=+1.166432482 container start 3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 06:50:02 compute-0 systemd[1]: libpod-3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c.scope: Deactivated successfully.
Nov 29 06:50:02 compute-0 compassionate_meninsky[259924]: 167 167
Nov 29 06:50:02 compute-0 conmon[259924]: conmon 3bfe9fd9a7f7924df2f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c.scope/container/memory.events
Nov 29 06:50:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:02.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:02 compute-0 podman[259908]: 2025-11-29 06:50:02.415961239 +0000 UTC m=+1.180371130 container attach 3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 06:50:02 compute-0 podman[259908]: 2025-11-29 06:50:02.416476044 +0000 UTC m=+1.180885895 container died 3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:50:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b88b283df89e45305ff495b39de5ecab2febb386624824f3a0b6bf87dca9414c-merged.mount: Deactivated successfully.
Nov 29 06:50:02 compute-0 sshd-session[259877]: Invalid user ubuntu from 27.112.78.245 port 42672
Nov 29 06:50:02 compute-0 podman[259908]: 2025-11-29 06:50:02.465985281 +0000 UTC m=+1.230395122 container remove 3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:50:02 compute-0 systemd[1]: libpod-conmon-3bfe9fd9a7f7924df2f697bbef9ba4eb120fbfdde14cc5469eb0e9bdb2454e5c.scope: Deactivated successfully.
Nov 29 06:50:02 compute-0 podman[259948]: 2025-11-29 06:50:02.627468455 +0000 UTC m=+0.040899430 container create 7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 06:50:02 compute-0 systemd[1]: Started libpod-conmon-7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9.scope.
Nov 29 06:50:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afd11d7f70042ca1d8a4f56df57e03b639bcd4fe773a0d22bb64f607027be07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afd11d7f70042ca1d8a4f56df57e03b639bcd4fe773a0d22bb64f607027be07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afd11d7f70042ca1d8a4f56df57e03b639bcd4fe773a0d22bb64f607027be07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afd11d7f70042ca1d8a4f56df57e03b639bcd4fe773a0d22bb64f607027be07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:02 compute-0 podman[259948]: 2025-11-29 06:50:02.688911854 +0000 UTC m=+0.102342849 container init 7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 06:50:02 compute-0 podman[259948]: 2025-11-29 06:50:02.696956548 +0000 UTC m=+0.110387523 container start 7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 06:50:02 compute-0 podman[259948]: 2025-11-29 06:50:02.700794945 +0000 UTC m=+0.114225910 container attach 7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:50:02 compute-0 podman[259948]: 2025-11-29 06:50:02.611724336 +0000 UTC m=+0.025155341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:50:02 compute-0 sshd-session[259877]: Received disconnect from 27.112.78.245 port 42672:11: Bye Bye [preauth]
Nov 29 06:50:02 compute-0 sshd-session[259877]: Disconnected from invalid user ubuntu 27.112.78.245 port 42672 [preauth]
Nov 29 06:50:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:02.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:02 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:50:02 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]: [
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:     {
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:         "available": false,
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:         "ceph_device": false,
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:         "lsm_data": {},
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:         "lvs": [],
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:         "path": "/dev/sr0",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:         "rejected_reasons": [
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "Insufficient space (<5GB)",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "Has a FileSystem"
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:         ],
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:         "sys_api": {
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "actuators": null,
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "device_nodes": "sr0",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "devname": "sr0",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "human_readable_size": "482.00 KB",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "id_bus": "ata",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "model": "QEMU DVD-ROM",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "nr_requests": "2",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "parent": "/dev/sr0",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "partitions": {},
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "path": "/dev/sr0",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "removable": "1",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "rev": "2.5+",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "ro": "0",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "rotational": "1",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "sas_address": "",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "sas_device_handle": "",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "scheduler_mode": "mq-deadline",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "sectors": 0,
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "sectorsize": "2048",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "size": 493568.0,
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "support_discard": "2048",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "type": "disk",
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:             "vendor": "QEMU"
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:         }
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]:     }
Nov 29 06:50:03 compute-0 elastic_varahamihira[259964]: ]
Nov 29 06:50:03 compute-0 systemd[1]: libpod-7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9.scope: Deactivated successfully.
Nov 29 06:50:03 compute-0 systemd[1]: libpod-7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9.scope: Consumed 1.208s CPU time.
Nov 29 06:50:03 compute-0 podman[259948]: 2025-11-29 06:50:03.887561632 +0000 UTC m=+1.300992697 container died 7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:50:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:04.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:04.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/2319096117' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:50:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/2319096117' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:50:04 compute-0 ceph-mon[74654]: pgmap v1128: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:04 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8afd11d7f70042ca1d8a4f56df57e03b639bcd4fe773a0d22bb64f607027be07-merged.mount: Deactivated successfully.
Nov 29 06:50:05 compute-0 sshd-session[261081]: Invalid user support from 118.193.39.127 port 42752
Nov 29 06:50:05 compute-0 sshd-session[261081]: Received disconnect from 118.193.39.127 port 42752:11: Bye Bye [preauth]
Nov 29 06:50:05 compute-0 sshd-session[261081]: Disconnected from invalid user support 118.193.39.127 port 42752 [preauth]
Nov 29 06:50:06 compute-0 podman[259948]: 2025-11-29 06:50:06.112100509 +0000 UTC m=+3.525531514 container remove 7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:50:06 compute-0 systemd[1]: libpod-conmon-7ae069206aa30e87010361f0730759e8f253ad0bef4b4d9169c0d134ef7f39c9.scope: Deactivated successfully.
Nov 29 06:50:06 compute-0 sudo[259842]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:50:06 compute-0 ceph-mon[74654]: pgmap v1129: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:50:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:50:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:06.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:50:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:50:06 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:50:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:50:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:50:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:50:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:06.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:50:06 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev c586e1d7-4cfa-46ed-8395-a440055d9e82 does not exist
Nov 29 06:50:06 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 80a10c9d-7bee-4824-9555-5b2bac572609 does not exist
Nov 29 06:50:06 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 5949d3f3-a967-415d-aaa7-97dbff98474f does not exist
Nov 29 06:50:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:50:06 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:50:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:50:06 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:50:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:50:06 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:50:06 compute-0 sudo[261085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:06 compute-0 sudo[261085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:06 compute-0 sudo[261085]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:06 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:06 compute-0 sudo[261110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:50:06 compute-0 sudo[261110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:06 compute-0 sudo[261110]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:07 compute-0 sudo[261135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:07 compute-0 sudo[261135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:07 compute-0 sudo[261135]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:07 compute-0 sudo[261160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:50:07 compute-0 sudo[261160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:50:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:50:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:50:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:50:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:50:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:50:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:50:07 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:50:07 compute-0 ceph-mon[74654]: pgmap v1130: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:07 compute-0 podman[261226]: 2025-11-29 06:50:07.552107343 +0000 UTC m=+0.078095434 container create c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_torvalds, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 06:50:07 compute-0 podman[261226]: 2025-11-29 06:50:07.497086912 +0000 UTC m=+0.023075063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:50:07 compute-0 systemd[1]: Started libpod-conmon-c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe.scope.
Nov 29 06:50:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:50:07 compute-0 podman[261226]: 2025-11-29 06:50:07.657094275 +0000 UTC m=+0.183082416 container init c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_torvalds, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 06:50:07 compute-0 podman[261226]: 2025-11-29 06:50:07.66841033 +0000 UTC m=+0.194398431 container start c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 06:50:07 compute-0 podman[261226]: 2025-11-29 06:50:07.672485433 +0000 UTC m=+0.198473534 container attach c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 06:50:07 compute-0 romantic_torvalds[261242]: 167 167
Nov 29 06:50:07 compute-0 systemd[1]: libpod-c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe.scope: Deactivated successfully.
Nov 29 06:50:07 compute-0 conmon[261242]: conmon c8ba3cdd2717fd4dd0ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe.scope/container/memory.events
Nov 29 06:50:07 compute-0 podman[261226]: 2025-11-29 06:50:07.676870895 +0000 UTC m=+0.202858966 container died c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2951d8c66507f64fcc8fafe3ae8d7d53470ec114c7db00fd0df73eb1d1c3f0e0-merged.mount: Deactivated successfully.
Nov 29 06:50:07 compute-0 podman[261226]: 2025-11-29 06:50:07.78917051 +0000 UTC m=+0.315158611 container remove c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 06:50:07 compute-0 systemd[1]: libpod-conmon-c8ba3cdd2717fd4dd0ab753c81035442881cf1d4ba9f75d2b7228c0f9a8421fe.scope: Deactivated successfully.
Nov 29 06:50:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:50:07 compute-0 podman[261266]: 2025-11-29 06:50:07.962104043 +0000 UTC m=+0.044010256 container create c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:50:07 compute-0 systemd[1]: Started libpod-conmon-c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b.scope.
Nov 29 06:50:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842841a5cd32a4d3368ba1fdde768ca6e88f2eda84436d1e9a519e113b568ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842841a5cd32a4d3368ba1fdde768ca6e88f2eda84436d1e9a519e113b568ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842841a5cd32a4d3368ba1fdde768ca6e88f2eda84436d1e9a519e113b568ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842841a5cd32a4d3368ba1fdde768ca6e88f2eda84436d1e9a519e113b568ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842841a5cd32a4d3368ba1fdde768ca6e88f2eda84436d1e9a519e113b568ae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:08 compute-0 podman[261266]: 2025-11-29 06:50:07.94440412 +0000 UTC m=+0.026310243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:50:08 compute-0 podman[261266]: 2025-11-29 06:50:08.043987082 +0000 UTC m=+0.125893215 container init c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 06:50:08 compute-0 podman[261266]: 2025-11-29 06:50:08.051033018 +0000 UTC m=+0.132939131 container start c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:50:08 compute-0 podman[261266]: 2025-11-29 06:50:08.054385931 +0000 UTC m=+0.136292074 container attach c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:50:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:08.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:08.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:08 compute-0 thirsty_mcnulty[261283]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:50:08 compute-0 thirsty_mcnulty[261283]: --> relative data size: 1.0
Nov 29 06:50:08 compute-0 thirsty_mcnulty[261283]: --> All data devices are unavailable
Nov 29 06:50:08 compute-0 systemd[1]: libpod-c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b.scope: Deactivated successfully.
Nov 29 06:50:08 compute-0 podman[261266]: 2025-11-29 06:50:08.946411086 +0000 UTC m=+1.028317249 container died c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:50:08 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:09 compute-0 ceph-mon[74654]: pgmap v1131: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-3842841a5cd32a4d3368ba1fdde768ca6e88f2eda84436d1e9a519e113b568ae-merged.mount: Deactivated successfully.
Nov 29 06:50:10 compute-0 podman[261266]: 2025-11-29 06:50:10.110591083 +0000 UTC m=+2.192497196 container remove c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mcnulty, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 06:50:10 compute-0 sudo[261160]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:10 compute-0 podman[261299]: 2025-11-29 06:50:10.202369127 +0000 UTC m=+1.229761303 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:50:10 compute-0 systemd[1]: libpod-conmon-c969f9b54c5fd6b488bae5d0b88fd44298c1a9798b470ea7a61d9a5e2ceae08b.scope: Deactivated successfully.
Nov 29 06:50:10 compute-0 sudo[261330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:10 compute-0 sudo[261330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:10 compute-0 sudo[261330]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:10 compute-0 sudo[261356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:50:10 compute-0 sudo[261356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:10 compute-0 sudo[261356]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:10 compute-0 sudo[261381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:10 compute-0 sudo[261381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:10 compute-0 sudo[261381]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:10.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:10 compute-0 sudo[261406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:50:10 compute-0 sudo[261406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:10.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:10 compute-0 podman[261472]: 2025-11-29 06:50:10.800394369 +0000 UTC m=+0.023179206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:50:10 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:11 compute-0 podman[261472]: 2025-11-29 06:50:11.035868042 +0000 UTC m=+0.258652819 container create 281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:50:11 compute-0 systemd[1]: Started libpod-conmon-281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c.scope.
Nov 29 06:50:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:50:11 compute-0 ceph-mon[74654]: pgmap v1132: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:11 compute-0 podman[261472]: 2025-11-29 06:50:11.745204003 +0000 UTC m=+0.967988870 container init 281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 06:50:11 compute-0 podman[261472]: 2025-11-29 06:50:11.756104386 +0000 UTC m=+0.978889193 container start 281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 06:50:11 compute-0 awesome_noether[261490]: 167 167
Nov 29 06:50:11 compute-0 systemd[1]: libpod-281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c.scope: Deactivated successfully.
Nov 29 06:50:11 compute-0 podman[261472]: 2025-11-29 06:50:11.998484842 +0000 UTC m=+1.221269689 container attach 281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 06:50:11 compute-0 podman[261472]: 2025-11-29 06:50:11.99913578 +0000 UTC m=+1.221920567 container died 281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:50:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:12.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1368f4007badfc1dbafd8b52358861f7b40315c33461021e0be2ab92c2c0c91-merged.mount: Deactivated successfully.
Nov 29 06:50:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:12.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:12 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:50:12 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:50:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:50:13 compute-0 podman[261472]: 2025-11-29 06:50:13.21083626 +0000 UTC m=+2.433621077 container remove 281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:50:13 compute-0 systemd[1]: libpod-conmon-281eaee22f7ec0cc81cb074aca450fbe07b33b0f9556413e3c654e38ae5a463c.scope: Deactivated successfully.
Nov 29 06:50:13 compute-0 podman[261515]: 2025-11-29 06:50:13.457190396 +0000 UTC m=+0.044240142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:50:13 compute-0 podman[261515]: 2025-11-29 06:50:13.690768716 +0000 UTC m=+0.277818422 container create aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 06:50:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:14.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:14 compute-0 systemd[1]: Started libpod-conmon-aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2.scope.
Nov 29 06:50:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:50:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc6b8d35a75c776419672f04e20975e172342ff0142f5ff1f24e0041adecea4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc6b8d35a75c776419672f04e20975e172342ff0142f5ff1f24e0041adecea4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc6b8d35a75c776419672f04e20975e172342ff0142f5ff1f24e0041adecea4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc6b8d35a75c776419672f04e20975e172342ff0142f5ff1f24e0041adecea4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:14 compute-0 ceph-mon[74654]: pgmap v1133: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:14 compute-0 podman[261515]: 2025-11-29 06:50:14.667524729 +0000 UTC m=+1.254574485 container init aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:50:14 compute-0 podman[261515]: 2025-11-29 06:50:14.676603962 +0000 UTC m=+1.263653658 container start aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:50:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:50:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:14.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:50:14 compute-0 podman[261515]: 2025-11-29 06:50:14.915267794 +0000 UTC m=+1.502317470 container attach aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 06:50:14 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]: {
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:     "1": [
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:         {
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:             "devices": [
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:                 "/dev/loop3"
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:             ],
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:             "lv_name": "ceph_lv0",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:             "lv_size": "7511998464",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:             "name": "ceph_lv0",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:             "tags": {
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:                 "ceph.cluster_name": "ceph",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:                 "ceph.crush_device_class": "",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:                 "ceph.encrypted": "0",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:                 "ceph.osd_id": "1",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:                 "ceph.type": "block",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:                 "ceph.vdo": "0"
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:             },
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:             "type": "block",
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:             "vg_name": "ceph_vg0"
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:         }
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]:     ]
Nov 29 06:50:15 compute-0 condescending_mendeleev[261532]: }
Nov 29 06:50:15 compute-0 systemd[1]: libpod-aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2.scope: Deactivated successfully.
Nov 29 06:50:15 compute-0 conmon[261532]: conmon aee34b057a7b9af05c00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2.scope/container/memory.events
Nov 29 06:50:15 compute-0 podman[261515]: 2025-11-29 06:50:15.471141383 +0000 UTC m=+2.058191089 container died aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:50:15 compute-0 sudo[261554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:15 compute-0 sudo[261554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:15 compute-0 sudo[261554]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cc6b8d35a75c776419672f04e20975e172342ff0142f5ff1f24e0041adecea4-merged.mount: Deactivated successfully.
Nov 29 06:50:15 compute-0 sudo[261580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:15 compute-0 sudo[261580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:15 compute-0 sudo[261580]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:15 compute-0 podman[261515]: 2025-11-29 06:50:15.863106302 +0000 UTC m=+2.450156008 container remove aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:50:15 compute-0 sudo[261406]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:15 compute-0 ceph-mon[74654]: pgmap v1134: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:15 compute-0 systemd[1]: libpod-conmon-aee34b057a7b9af05c001a883b81e8d0789198e401183328e12fbb616f017bc2.scope: Deactivated successfully.
Nov 29 06:50:15 compute-0 sudo[261605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:15 compute-0 sudo[261605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:15 compute-0 sudo[261605]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:16 compute-0 sudo[261630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:50:16 compute-0 sudo[261630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:16 compute-0 sudo[261630]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:16 compute-0 sudo[261655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:16 compute-0 sudo[261655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:16 compute-0 sudo[261655]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:16 compute-0 sudo[261680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:50:16 compute-0 sudo[261680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:16.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:16 compute-0 podman[261745]: 2025-11-29 06:50:16.515641532 +0000 UTC m=+0.046003332 container create bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 06:50:16 compute-0 systemd[1]: Started libpod-conmon-bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8.scope.
Nov 29 06:50:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:50:16 compute-0 podman[261745]: 2025-11-29 06:50:16.49798997 +0000 UTC m=+0.028351790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:50:16 compute-0 podman[261745]: 2025-11-29 06:50:16.598204069 +0000 UTC m=+0.128565889 container init bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_northcutt, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:50:16 compute-0 podman[261745]: 2025-11-29 06:50:16.605556594 +0000 UTC m=+0.135918394 container start bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_northcutt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:50:16 compute-0 zen_northcutt[261761]: 167 167
Nov 29 06:50:16 compute-0 systemd[1]: libpod-bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8.scope: Deactivated successfully.
Nov 29 06:50:16 compute-0 conmon[261761]: conmon bee74bc97f6963cd17b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8.scope/container/memory.events
Nov 29 06:50:16 compute-0 podman[261745]: 2025-11-29 06:50:16.611658374 +0000 UTC m=+0.142020184 container attach bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_northcutt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:50:16 compute-0 podman[261745]: 2025-11-29 06:50:16.61258978 +0000 UTC m=+0.142951590 container died bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:50:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2197e93cc3399d7c2c3e52a7548394ecc61e07190d6f57ae5a7450dd16d162e3-merged.mount: Deactivated successfully.
Nov 29 06:50:16 compute-0 podman[261745]: 2025-11-29 06:50:16.647975554 +0000 UTC m=+0.178337354 container remove bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 06:50:16 compute-0 systemd[1]: libpod-conmon-bee74bc97f6963cd17b24984114892d0abe3d5878423ef7747a8f8d44ddc04e8.scope: Deactivated successfully.
Nov 29 06:50:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:16.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:16 compute-0 podman[261786]: 2025-11-29 06:50:16.840145491 +0000 UTC m=+0.046807873 container create 769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:50:16 compute-0 systemd[1]: Started libpod-conmon-769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd.scope.
Nov 29 06:50:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89471b8f4db54a33ffcaaa78643d0e97a9035d660e49493a144ce551c721d638/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:16 compute-0 podman[261786]: 2025-11-29 06:50:16.821803831 +0000 UTC m=+0.028466263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89471b8f4db54a33ffcaaa78643d0e97a9035d660e49493a144ce551c721d638/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89471b8f4db54a33ffcaaa78643d0e97a9035d660e49493a144ce551c721d638/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89471b8f4db54a33ffcaaa78643d0e97a9035d660e49493a144ce551c721d638/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:50:16 compute-0 podman[261786]: 2025-11-29 06:50:16.926660749 +0000 UTC m=+0.133323131 container init 769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 06:50:16 compute-0 podman[261786]: 2025-11-29 06:50:16.933582862 +0000 UTC m=+0.140245244 container start 769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 06:50:16 compute-0 podman[261786]: 2025-11-29 06:50:16.937731897 +0000 UTC m=+0.144394279 container attach 769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:50:16 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:50:17.240 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:50:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:50:17.241 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:50:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:50:17.241 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:50:17 compute-0 ceph-mon[74654]: pgmap v1135: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:50:17 compute-0 elegant_tu[261802]: {
Nov 29 06:50:17 compute-0 elegant_tu[261802]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:50:17 compute-0 elegant_tu[261802]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:50:17 compute-0 elegant_tu[261802]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:50:17 compute-0 elegant_tu[261802]:         "osd_id": 1,
Nov 29 06:50:17 compute-0 elegant_tu[261802]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:50:17 compute-0 elegant_tu[261802]:         "type": "bluestore"
Nov 29 06:50:17 compute-0 elegant_tu[261802]:     }
Nov 29 06:50:17 compute-0 elegant_tu[261802]: }
Nov 29 06:50:17 compute-0 systemd[1]: libpod-769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd.scope: Deactivated successfully.
Nov 29 06:50:17 compute-0 podman[261786]: 2025-11-29 06:50:17.879758373 +0000 UTC m=+1.086420785 container died 769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:50:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-89471b8f4db54a33ffcaaa78643d0e97a9035d660e49493a144ce551c721d638-merged.mount: Deactivated successfully.
Nov 29 06:50:18 compute-0 podman[261786]: 2025-11-29 06:50:18.10816746 +0000 UTC m=+1.314829852 container remove 769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 06:50:18 compute-0 systemd[1]: libpod-conmon-769bdc2bba95ca427c8d517b5785d465e525353a67702321195bbac5bbda54fd.scope: Deactivated successfully.
Nov 29 06:50:18 compute-0 sudo[261680]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:50:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:50:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:50:18 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:50:18 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 9b8cfe2c-a2b7-498e-9126-4e66c5faac47 does not exist
Nov 29 06:50:18 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 2c7f157c-b7db-4ded-beb0-741e69a61a20 does not exist
Nov 29 06:50:18 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev dc72e334-e47a-4453-a457-df9eef8f4032 does not exist
Nov 29 06:50:18 compute-0 sudo[261839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:18 compute-0 sudo[261839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:18 compute-0 sudo[261839]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:18 compute-0 sudo[261864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:50:18 compute-0 sudo[261864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:18 compute-0 sudo[261864]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:50:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:18.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:50:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:18.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:18 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:50:19 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:50:20 compute-0 podman[261890]: 2025-11-29 06:50:20.113204719 +0000 UTC m=+0.066666426 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:50:20 compute-0 podman[261891]: 2025-11-29 06:50:20.145991891 +0000 UTC m=+0.099216792 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 06:50:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:20.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:20 compute-0 ceph-mon[74654]: pgmap v1136: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:20.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:20 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:21 compute-0 ceph-mon[74654]: pgmap v1137: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:22.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:22.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:50:22 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:23 compute-0 nova_compute[251877]: 2025-11-29 06:50:23.153 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 10.94 sec
Nov 29 06:50:24 compute-0 ceph-mon[74654]: pgmap v1138: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:50:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:50:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:50:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:50:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:50:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:50:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:50:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:24.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:50:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:24.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:24 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:26 compute-0 ceph-mon[74654]: pgmap v1139: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:26.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:26.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:26 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:27 compute-0 ceph-mon[74654]: pgmap v1140: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:50:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:28.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:28.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:28 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:50:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:50:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:50:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:50:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:50:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:50:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:50:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:50:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:50:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:50:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:30.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:30 compute-0 ceph-mon[74654]: pgmap v1141: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:30.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:30 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:31 compute-0 ceph-mon[74654]: pgmap v1142: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:32.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:32.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:50:32 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:33 compute-0 ceph-mon[74654]: pgmap v1143: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:34.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:34.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:34 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:35 compute-0 sudo[261944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:35 compute-0 sudo[261944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:35 compute-0 sudo[261944]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:35 compute-0 sudo[261969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:35 compute-0 sudo[261969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:35 compute-0 sudo[261969]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:36 compute-0 ceph-mon[74654]: pgmap v1144: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:36.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:36.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:36 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:37 compute-0 ceph-mon[74654]: pgmap v1145: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:50:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:38.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:38.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:38 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:39 compute-0 nova_compute[251877]: 2025-11-29 06:50:39.596 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 06:50:39 compute-0 nova_compute[251877]: 2025-11-29 06:50:39.597 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 06:50:39 compute-0 nova_compute[251877]: 2025-11-29 06:50:39.728 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Refreshing inventories for resource provider 36ed0248-8d04-4532-95bb-daab89f12202 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 06:50:39 compute-0 nova_compute[251877]: 2025-11-29 06:50:39.818 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Updating ProviderTree inventory for provider 36ed0248-8d04-4532-95bb-daab89f12202 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 06:50:39 compute-0 nova_compute[251877]: 2025-11-29 06:50:39.818 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Updating inventory in ProviderTree for provider 36ed0248-8d04-4532-95bb-daab89f12202 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 06:50:39 compute-0 nova_compute[251877]: 2025-11-29 06:50:39.833 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Refreshing aggregate associations for resource provider 36ed0248-8d04-4532-95bb-daab89f12202, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 06:50:39 compute-0 nova_compute[251877]: 2025-11-29 06:50:39.861 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Refreshing trait associations for resource provider 36ed0248-8d04-4532-95bb-daab89f12202, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 06:50:39 compute-0 nova_compute[251877]: 2025-11-29 06:50:39.878 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:50:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:50:40 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3008365598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:50:40 compute-0 nova_compute[251877]: 2025-11-29 06:50:40.311 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:50:40 compute-0 nova_compute[251877]: 2025-11-29 06:50:40.320 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 06:50:40 compute-0 ceph-mon[74654]: pgmap v1146: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:40.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:40.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:40 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:41 compute-0 podman[262018]: 2025-11-29 06:50:41.094677127 +0000 UTC m=+0.067693985 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 06:50:41 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3008365598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:50:41 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/3662469046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:50:41 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/4231777605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:50:41 compute-0 ceph-mon[74654]: pgmap v1147: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:42 compute-0 nova_compute[251877]: 2025-11-29 06:50:42.402 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 9.25 sec
Nov 29 06:50:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:42.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:50:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:42.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:50:42 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:50:44 compute-0 ceph-mon[74654]: pgmap v1148: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:44.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:44.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:44 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:45 compute-0 ceph-mon[74654]: pgmap v1149: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:46.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:50:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:46.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:50:46 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:50:48 compute-0 ceph-mon[74654]: pgmap v1150: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:48.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:48.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:48 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:50 compute-0 sshd-session[262045]: Invalid user cloudera from 176.109.67.96 port 47174
Nov 29 06:50:50 compute-0 ceph-mon[74654]: pgmap v1151: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:50 compute-0 sshd-session[262045]: Received disconnect from 176.109.67.96 port 47174:11: Bye Bye [preauth]
Nov 29 06:50:50 compute-0 sshd-session[262045]: Disconnected from invalid user cloudera 176.109.67.96 port 47174 [preauth]
Nov 29 06:50:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:50.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:50:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:50.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:50:50 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:51 compute-0 podman[262047]: 2025-11-29 06:50:51.139516906 +0000 UTC m=+0.099591242 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 06:50:51 compute-0 podman[262048]: 2025-11-29 06:50:51.168131823 +0000 UTC m=+0.117402188 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 06:50:52 compute-0 ceph-mon[74654]: pgmap v1152: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:52.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:52.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:52 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:50:53 compute-0 ceph-mon[74654]: pgmap v1153: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:50:54
Nov 29 06:50:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:50:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:50:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.control', 'volumes', 'vms', 'default.rgw.meta', 'backups', '.mgr']
Nov 29 06:50:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:50:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:50:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:50:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:50:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:50:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:50:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:50:54 compute-0 sshd-session[262094]: Received disconnect from 197.13.24.157 port 46424:11: Bye Bye [preauth]
Nov 29 06:50:54 compute-0 sshd-session[262094]: Disconnected from authenticating user root 197.13.24.157 port 46424 [preauth]
Nov 29 06:50:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:54.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:54.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:54 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:55 compute-0 ceph-mon[74654]: pgmap v1154: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:56 compute-0 sudo[262099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:56 compute-0 sudo[262099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:56 compute-0 sudo[262099]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:56 compute-0 sudo[262124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:50:56 compute-0 sudo[262124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:50:56 compute-0 sudo[262124]: pam_unix(sudo:session): session closed for user root
Nov 29 06:50:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:56.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:56 compute-0 sshd-session[262097]: Received disconnect from 34.92.81.41 port 54338:11: Bye Bye [preauth]
Nov 29 06:50:56 compute-0 sshd-session[262097]: Disconnected from authenticating user root 34.92.81.41 port 54338 [preauth]
Nov 29 06:50:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:50:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:56.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:50:56 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:50:58 compute-0 ceph-mon[74654]: pgmap v1155: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:50:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:50:58.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:50:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:50:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:50:58.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:50:58 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:00 compute-0 ceph-mon[74654]: pgmap v1156: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:00.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:00.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:00 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:01 compute-0 ceph-mon[74654]: pgmap v1157: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:02.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:02 compute-0 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 06:51:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:02.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:03 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:03 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/3119268237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:51:03 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/3119268237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:51:03 compute-0 radosgw[93592]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 06:51:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:51:04 compute-0 ceph-mon[74654]: pgmap v1158: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:04.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:04 compute-0 sshd-session[262155]: Invalid user elsearch from 162.214.92.14 port 46396
Nov 29 06:51:04 compute-0 sshd-session[262155]: Received disconnect from 162.214.92.14 port 46396:11: Bye Bye [preauth]
Nov 29 06:51:04 compute-0 sshd-session[262155]: Disconnected from invalid user elsearch 162.214.92.14 port 46396 [preauth]
Nov 29 06:51:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:04.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:05 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 06:51:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:06.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:06 compute-0 ceph-mon[74654]: pgmap v1159: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 06:51:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:06.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:07 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 90 op/s
Nov 29 06:51:07 compute-0 sshd-session[262151]: Invalid user user from 45.78.221.93 port 37912
Nov 29 06:51:07 compute-0 sshd-session[262151]: Received disconnect from 45.78.221.93 port 37912:11: Bye Bye [preauth]
Nov 29 06:51:07 compute-0 sshd-session[262151]: Disconnected from invalid user user 45.78.221.93 port 37912 [preauth]
Nov 29 06:51:07 compute-0 ceph-mon[74654]: pgmap v1160: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 90 op/s
Nov 29 06:51:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:51:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:08.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:08.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:09 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 123 op/s
Nov 29 06:51:10 compute-0 ceph-mon[74654]: pgmap v1161: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 123 op/s
Nov 29 06:51:10 compute-0 nova_compute[251877]: 2025-11-29 06:51:10.200 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 06:51:10 compute-0 nova_compute[251877]: 2025-11-29 06:51:10.202 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 06:51:10 compute-0 nova_compute[251877]: 2025-11-29 06:51:10.203 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 78.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:51:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:10.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:10.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:11 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 123 op/s
Nov 29 06:51:11 compute-0 sshd-session[262160]: Invalid user dmdba from 49.247.35.31 port 61872
Nov 29 06:51:11 compute-0 podman[262163]: 2025-11-29 06:51:11.577812771 +0000 UTC m=+0.107525754 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 06:51:11 compute-0 sshd-session[262160]: Received disconnect from 49.247.35.31 port 61872:11: Bye Bye [preauth]
Nov 29 06:51:11 compute-0 sshd-session[262160]: Disconnected from invalid user dmdba 49.247.35.31 port 61872 [preauth]
Nov 29 06:51:12 compute-0 ceph-mon[74654]: pgmap v1162: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 0 B/s wr, 123 op/s
Nov 29 06:51:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:12.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:12 compute-0 nova_compute[251877]: 2025-11-29 06:51:12.697 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 20.29 sec
Nov 29 06:51:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:12.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:51:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:51:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:51:13 compute-0 sshd-session[262185]: Invalid user stack from 103.143.238.173 port 37850
Nov 29 06:51:13 compute-0 sshd-session[262185]: Received disconnect from 103.143.238.173 port 37850:11: Bye Bye [preauth]
Nov 29 06:51:13 compute-0 sshd-session[262185]: Disconnected from invalid user stack 103.143.238.173 port 37850 [preauth]
Nov 29 06:51:14 compute-0 ceph-mon[74654]: pgmap v1163: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Nov 29 06:51:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:14.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:14.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:15 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Nov 29 06:51:15 compute-0 ceph-mon[74654]: pgmap v1164: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 136 op/s
Nov 29 06:51:16 compute-0 sudo[262188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:16 compute-0 sudo[262188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:16 compute-0 sudo[262188]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:16 compute-0 sudo[262215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:16 compute-0 sudo[262215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:16 compute-0 sudo[262215]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:16.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:16.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:17 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Nov 29 06:51:17 compute-0 sshd-session[262190]: Received disconnect from 193.163.72.91 port 44094:11: Bye Bye [preauth]
Nov 29 06:51:17 compute-0 sshd-session[262190]: Disconnected from authenticating user root 193.163.72.91 port 44094 [preauth]
Nov 29 06:51:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:51:17.243 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:51:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:51:17.244 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:51:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:51:17.245 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:51:17 compute-0 sshd-session[262240]: Invalid user mcserver from 118.193.39.127 port 54854
Nov 29 06:51:17 compute-0 sshd-session[262240]: Received disconnect from 118.193.39.127 port 54854:11: Bye Bye [preauth]
Nov 29 06:51:17 compute-0 sshd-session[262240]: Disconnected from invalid user mcserver 118.193.39.127 port 54854 [preauth]
Nov 29 06:51:18 compute-0 ceph-mon[74654]: pgmap v1165: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Nov 29 06:51:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:51:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:18.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:18 compute-0 sudo[262243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:18 compute-0 sudo[262243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:18 compute-0 sudo[262243]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:18 compute-0 sudo[262268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:51:18 compute-0 sudo[262268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:18 compute-0 sudo[262268]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:18 compute-0 sudo[262293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:18 compute-0 sudo[262293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:18 compute-0 sudo[262293]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:18.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:18 compute-0 sudo[262318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:51:18 compute-0 sudo[262318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:19 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Nov 29 06:51:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:51:19 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:51:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:51:19 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:51:19 compute-0 sudo[262318]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 06:51:19 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 06:51:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 06:51:19 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 06:51:20 compute-0 ceph-mon[74654]: pgmap v1166: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Nov 29 06:51:20 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:51:20 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:51:20 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 06:51:20 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 06:51:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:20.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:20.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:21 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 13 op/s
Nov 29 06:51:21 compute-0 ceph-mon[74654]: pgmap v1167: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 13 op/s
Nov 29 06:51:22 compute-0 podman[262376]: 2025-11-29 06:51:22.099685376 +0000 UTC m=+0.063022715 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 06:51:22 compute-0 podman[262377]: 2025-11-29 06:51:22.152685951 +0000 UTC m=+0.107460312 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:51:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:51:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:22.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:51:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:51:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:51:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:51:22 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:51:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:51:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:51:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:51:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:22.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:51:22 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 9fa089c7-a11d-4ae1-8f1a-44bed0fa8eaa does not exist
Nov 29 06:51:22 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 17afbd12-5284-4de0-924b-2d8f1fe44297 does not exist
Nov 29 06:51:22 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 1da5f078-eb4b-4580-8ed0-aec8f2edc064 does not exist
Nov 29 06:51:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:51:22 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:51:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:51:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:51:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:51:22 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:51:23 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 13 op/s
Nov 29 06:51:23 compute-0 sudo[262419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:23 compute-0 sudo[262419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:23 compute-0 sudo[262419]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:23 compute-0 sudo[262445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:51:23 compute-0 sudo[262445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:23 compute-0 sudo[262445]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:51:23 compute-0 sudo[262470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:23 compute-0 sudo[262470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:23 compute-0 sudo[262470]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:23 compute-0 sudo[262495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:51:23 compute-0 sudo[262495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:23 compute-0 podman[262561]: 2025-11-29 06:51:23.713781746 +0000 UTC m=+0.043088501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:51:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:51:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:51:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:51:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:51:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:51:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:51:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:51:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:51:23 compute-0 ceph-mon[74654]: pgmap v1168: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 13 op/s
Nov 29 06:51:24 compute-0 podman[262561]: 2025-11-29 06:51:24.006205313 +0000 UTC m=+0.335511968 container create 70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:51:24 compute-0 systemd[1]: Started libpod-conmon-70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf.scope.
Nov 29 06:51:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:51:24 compute-0 podman[262561]: 2025-11-29 06:51:24.292324346 +0000 UTC m=+0.621631091 container init 70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 06:51:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:51:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:51:24 compute-0 podman[262561]: 2025-11-29 06:51:24.303115146 +0000 UTC m=+0.632421801 container start 70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 06:51:24 compute-0 podman[262561]: 2025-11-29 06:51:24.306166391 +0000 UTC m=+0.635473086 container attach 70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 06:51:24 compute-0 systemd[1]: libpod-70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf.scope: Deactivated successfully.
Nov 29 06:51:24 compute-0 condescending_chatterjee[262578]: 167 167
Nov 29 06:51:24 compute-0 podman[262561]: 2025-11-29 06:51:24.309987828 +0000 UTC m=+0.639294483 container died 70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:51:24 compute-0 conmon[262578]: conmon 70aa9f2fa562b64804b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf.scope/container/memory.events
Nov 29 06:51:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:51:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:51:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:51:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:51:24 compute-0 nova_compute[251877]: 2025-11-29 06:51:24.328 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:51:24 compute-0 nova_compute[251877]: 2025-11-29 06:51:24.330 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:51:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-96c7920229d497bb6dece33ffd45879e3c56a94e1497413c4cfc97038064a4fe-merged.mount: Deactivated successfully.
Nov 29 06:51:24 compute-0 podman[262561]: 2025-11-29 06:51:24.359784443 +0000 UTC m=+0.689091128 container remove 70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:51:24 compute-0 systemd[1]: libpod-conmon-70aa9f2fa562b64804b2aa49daf23f3c08ef7c3d0266bc1d15ad4083e0b6ceaf.scope: Deactivated successfully.
Nov 29 06:51:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:24.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:24 compute-0 podman[262602]: 2025-11-29 06:51:24.566592659 +0000 UTC m=+0.055368442 container create 4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:51:24 compute-0 systemd[1]: Started libpod-conmon-4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224.scope.
Nov 29 06:51:24 compute-0 podman[262602]: 2025-11-29 06:51:24.540306207 +0000 UTC m=+0.029081980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:51:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa68873edf46f0586240f021bf8990e0d7f36674e8c4d4750c6df92850cc4a54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa68873edf46f0586240f021bf8990e0d7f36674e8c4d4750c6df92850cc4a54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa68873edf46f0586240f021bf8990e0d7f36674e8c4d4750c6df92850cc4a54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa68873edf46f0586240f021bf8990e0d7f36674e8c4d4750c6df92850cc4a54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa68873edf46f0586240f021bf8990e0d7f36674e8c4d4750c6df92850cc4a54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:51:24 compute-0 podman[262602]: 2025-11-29 06:51:24.7297787 +0000 UTC m=+0.218554463 container init 4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 06:51:24 compute-0 podman[262602]: 2025-11-29 06:51:24.739447089 +0000 UTC m=+0.228222872 container start 4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 06:51:24 compute-0 podman[262602]: 2025-11-29 06:51:24.763662212 +0000 UTC m=+0.252438055 container attach 4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cray, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:51:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:24.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:25 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:25 compute-0 ceph-mon[74654]: pgmap v1169: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:25 compute-0 distracted_cray[262618]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:51:25 compute-0 distracted_cray[262618]: --> relative data size: 1.0
Nov 29 06:51:25 compute-0 distracted_cray[262618]: --> All data devices are unavailable
Nov 29 06:51:25 compute-0 systemd[1]: libpod-4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224.scope: Deactivated successfully.
Nov 29 06:51:25 compute-0 podman[262602]: 2025-11-29 06:51:25.737294618 +0000 UTC m=+1.226070401 container died 4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:51:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.004000111s ======
Nov 29 06:51:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:26.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000111s
Nov 29 06:51:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa68873edf46f0586240f021bf8990e0d7f36674e8c4d4750c6df92850cc4a54-merged.mount: Deactivated successfully.
Nov 29 06:51:26 compute-0 podman[262602]: 2025-11-29 06:51:26.882443697 +0000 UTC m=+2.371219480 container remove 4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cray, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 06:51:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:26.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:26 compute-0 sudo[262495]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:26 compute-0 systemd[1]: libpod-conmon-4bc481d1fca133b730ead2b025de24033300e0382b264b7e5dcc1018767ec224.scope: Deactivated successfully.
Nov 29 06:51:27 compute-0 sudo[262647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:27 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:27 compute-0 sudo[262647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:27 compute-0 sudo[262647]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:27 compute-0 sudo[262672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:51:27 compute-0 sudo[262672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:27 compute-0 sudo[262672]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:27 compute-0 sudo[262700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:27 compute-0 sudo[262700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:27 compute-0 sudo[262700]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:27 compute-0 sudo[262725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:51:27 compute-0 sudo[262725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:27 compute-0 podman[262791]: 2025-11-29 06:51:27.650658716 +0000 UTC m=+0.046330990 container create 3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:51:27 compute-0 systemd[1]: Started libpod-conmon-3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1.scope.
Nov 29 06:51:27 compute-0 podman[262791]: 2025-11-29 06:51:27.630795453 +0000 UTC m=+0.026467747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:51:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:51:27 compute-0 podman[262791]: 2025-11-29 06:51:27.768062203 +0000 UTC m=+0.163734487 container init 3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:51:27 compute-0 podman[262791]: 2025-11-29 06:51:27.776500278 +0000 UTC m=+0.172172582 container start 3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:51:27 compute-0 vigorous_sutherland[262807]: 167 167
Nov 29 06:51:27 compute-0 systemd[1]: libpod-3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1.scope: Deactivated successfully.
Nov 29 06:51:27 compute-0 conmon[262807]: conmon 3ffe9df121bcd5d4c6f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1.scope/container/memory.events
Nov 29 06:51:27 compute-0 podman[262791]: 2025-11-29 06:51:27.822523009 +0000 UTC m=+0.218195323 container attach 3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 29 06:51:27 compute-0 podman[262791]: 2025-11-29 06:51:27.823349662 +0000 UTC m=+0.219021986 container died 3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:51:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4e9195dae577743af8b239b688a1b70bc57e9b1e489ffb666ce16002efbe703-merged.mount: Deactivated successfully.
Nov 29 06:51:27 compute-0 podman[262791]: 2025-11-29 06:51:27.998952439 +0000 UTC m=+0.394624713 container remove 3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 06:51:28 compute-0 systemd[1]: libpod-conmon-3ffe9df121bcd5d4c6f45188057367ab2b97d94df27d4b3b3ab9a21ec52bebb1.scope: Deactivated successfully.
Nov 29 06:51:28 compute-0 podman[262831]: 2025-11-29 06:51:28.187025083 +0000 UTC m=+0.046384412 container create 58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 06:51:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:51:28 compute-0 systemd[1]: Started libpod-conmon-58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e.scope.
Nov 29 06:51:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c13b0a5e9df95195d441afa0a47b19e581040044513173d6d680e94f59ec46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c13b0a5e9df95195d441afa0a47b19e581040044513173d6d680e94f59ec46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c13b0a5e9df95195d441afa0a47b19e581040044513173d6d680e94f59ec46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c13b0a5e9df95195d441afa0a47b19e581040044513173d6d680e94f59ec46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:51:28 compute-0 podman[262831]: 2025-11-29 06:51:28.168349273 +0000 UTC m=+0.027708602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:51:28 compute-0 podman[262831]: 2025-11-29 06:51:28.285033871 +0000 UTC m=+0.144393210 container init 58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jang, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:51:28 compute-0 podman[262831]: 2025-11-29 06:51:28.292714864 +0000 UTC m=+0.152074203 container start 58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 06:51:28 compute-0 podman[262831]: 2025-11-29 06:51:28.308313169 +0000 UTC m=+0.167672548 container attach 58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jang, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:51:28 compute-0 ceph-mon[74654]: pgmap v1170: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:28.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:28 compute-0 sshd-session[262697]: Invalid user oracle from 103.31.39.143 port 49380
Nov 29 06:51:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:28.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:29 compute-0 sshd-session[262697]: Received disconnect from 103.31.39.143 port 49380:11: Bye Bye [preauth]
Nov 29 06:51:29 compute-0 sshd-session[262697]: Disconnected from invalid user oracle 103.31.39.143 port 49380 [preauth]
Nov 29 06:51:29 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:29 compute-0 sharp_jang[262847]: {
Nov 29 06:51:29 compute-0 sharp_jang[262847]:     "1": [
Nov 29 06:51:29 compute-0 sharp_jang[262847]:         {
Nov 29 06:51:29 compute-0 sharp_jang[262847]:             "devices": [
Nov 29 06:51:29 compute-0 sharp_jang[262847]:                 "/dev/loop3"
Nov 29 06:51:29 compute-0 sharp_jang[262847]:             ],
Nov 29 06:51:29 compute-0 sharp_jang[262847]:             "lv_name": "ceph_lv0",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:             "lv_size": "7511998464",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:             "name": "ceph_lv0",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:             "tags": {
Nov 29 06:51:29 compute-0 sharp_jang[262847]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:                 "ceph.cluster_name": "ceph",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:                 "ceph.crush_device_class": "",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:                 "ceph.encrypted": "0",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:                 "ceph.osd_id": "1",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:                 "ceph.type": "block",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:                 "ceph.vdo": "0"
Nov 29 06:51:29 compute-0 sharp_jang[262847]:             },
Nov 29 06:51:29 compute-0 sharp_jang[262847]:             "type": "block",
Nov 29 06:51:29 compute-0 sharp_jang[262847]:             "vg_name": "ceph_vg0"
Nov 29 06:51:29 compute-0 sharp_jang[262847]:         }
Nov 29 06:51:29 compute-0 sharp_jang[262847]:     ]
Nov 29 06:51:29 compute-0 sharp_jang[262847]: }
Nov 29 06:51:29 compute-0 systemd[1]: libpod-58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e.scope: Deactivated successfully.
Nov 29 06:51:29 compute-0 podman[262831]: 2025-11-29 06:51:29.185429218 +0000 UTC m=+1.044788547 container died 58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:51:29 compute-0 nova_compute[251877]: 2025-11-29 06:51:29.402 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:51:29 compute-0 nova_compute[251877]: 2025-11-29 06:51:29.404 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 06:51:29 compute-0 nova_compute[251877]: 2025-11-29 06:51:29.405 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 06:51:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5c13b0a5e9df95195d441afa0a47b19e581040044513173d6d680e94f59ec46-merged.mount: Deactivated successfully.
Nov 29 06:51:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:51:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:51:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:51:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:51:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:51:29 compute-0 ceph-mon[74654]: pgmap v1171: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:51:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:51:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:51:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:51:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:51:29 compute-0 nova_compute[251877]: 2025-11-29 06:51:29.766 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 7.07 sec
Nov 29 06:51:30 compute-0 podman[262831]: 2025-11-29 06:51:30.300485659 +0000 UTC m=+2.159844998 container remove 58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 06:51:30 compute-0 sudo[262725]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:30 compute-0 systemd[1]: libpod-conmon-58a2dac1b37ce6c84ff82b1f5d996cb27914fcf81671d14f13d90ae94363195e.scope: Deactivated successfully.
Nov 29 06:51:30 compute-0 sudo[262869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:30 compute-0 sudo[262869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:30 compute-0 sudo[262869]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:30 compute-0 nova_compute[251877]: 2025-11-29 06:51:30.499 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 06:51:30 compute-0 nova_compute[251877]: 2025-11-29 06:51:30.501 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:51:30 compute-0 nova_compute[251877]: 2025-11-29 06:51:30.501 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:51:30 compute-0 nova_compute[251877]: 2025-11-29 06:51:30.502 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:51:30 compute-0 nova_compute[251877]: 2025-11-29 06:51:30.502 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:51:30 compute-0 nova_compute[251877]: 2025-11-29 06:51:30.502 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:51:30 compute-0 nova_compute[251877]: 2025-11-29 06:51:30.502 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:51:30 compute-0 nova_compute[251877]: 2025-11-29 06:51:30.503 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 06:51:30 compute-0 nova_compute[251877]: 2025-11-29 06:51:30.504 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:51:30 compute-0 sudo[262894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:51:30 compute-0 sudo[262894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:30 compute-0 sudo[262894]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:30.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:30 compute-0 sudo[262919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:30 compute-0 sudo[262919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:30 compute-0 sudo[262919]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:30 compute-0 sudo[262944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:51:30 compute-0 sudo[262944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:30.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:31 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:31 compute-0 nova_compute[251877]: 2025-11-29 06:51:31.088 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:51:31 compute-0 nova_compute[251877]: 2025-11-29 06:51:31.089 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:51:31 compute-0 nova_compute[251877]: 2025-11-29 06:51:31.089 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:51:31 compute-0 nova_compute[251877]: 2025-11-29 06:51:31.090 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 06:51:31 compute-0 nova_compute[251877]: 2025-11-29 06:51:31.091 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:51:31 compute-0 podman[263008]: 2025-11-29 06:51:31.070637501 +0000 UTC m=+0.037220556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:51:31 compute-0 podman[263008]: 2025-11-29 06:51:31.170913532 +0000 UTC m=+0.137496557 container create fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cannon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 06:51:31 compute-0 systemd[1]: Started libpod-conmon-fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98.scope.
Nov 29 06:51:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:51:31 compute-0 podman[263008]: 2025-11-29 06:51:31.443149128 +0000 UTC m=+0.409732183 container init fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 06:51:31 compute-0 podman[263008]: 2025-11-29 06:51:31.450796971 +0000 UTC m=+0.417380006 container start fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:51:31 compute-0 eager_cannon[263046]: 167 167
Nov 29 06:51:31 compute-0 systemd[1]: libpod-fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98.scope: Deactivated successfully.
Nov 29 06:51:31 compute-0 conmon[263046]: conmon fb00729189b82a88368c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98.scope/container/memory.events
Nov 29 06:51:31 compute-0 podman[263008]: 2025-11-29 06:51:31.486415842 +0000 UTC m=+0.452998887 container attach fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:51:31 compute-0 podman[263008]: 2025-11-29 06:51:31.487692498 +0000 UTC m=+0.454275513 container died fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 06:51:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:51:31 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/763194160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:51:31 compute-0 nova_compute[251877]: 2025-11-29 06:51:31.572 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:51:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bde9d2d4c7f98f349bfc2b320cd15dbf9184016f5677a7efb6747bd6c92417b-merged.mount: Deactivated successfully.
Nov 29 06:51:31 compute-0 nova_compute[251877]: 2025-11-29 06:51:31.741 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 06:51:31 compute-0 nova_compute[251877]: 2025-11-29 06:51:31.744 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5140MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 06:51:31 compute-0 nova_compute[251877]: 2025-11-29 06:51:31.744 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:51:31 compute-0 nova_compute[251877]: 2025-11-29 06:51:31.744 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:51:32 compute-0 podman[263008]: 2025-11-29 06:51:32.250269519 +0000 UTC m=+1.216852534 container remove fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cannon, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 06:51:32 compute-0 systemd[1]: libpod-conmon-fb00729189b82a88368c713200ee8ea329094bab188e73092f3c27f493426e98.scope: Deactivated successfully.
Nov 29 06:51:32 compute-0 ceph-mon[74654]: pgmap v1172: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:32 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/990427680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:51:32 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2305942895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:51:32 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/763194160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:51:32 compute-0 podman[263074]: 2025-11-29 06:51:32.468140623 +0000 UTC m=+0.085074038 container create 3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:51:32 compute-0 podman[263074]: 2025-11-29 06:51:32.417977217 +0000 UTC m=+0.034910642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:51:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:32.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:32 compute-0 systemd[1]: Started libpod-conmon-3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f.scope.
Nov 29 06:51:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ef2d886d4e0785ed8f9d8b511467156f223b7765b15a60c55969eff82f01ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ef2d886d4e0785ed8f9d8b511467156f223b7765b15a60c55969eff82f01ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ef2d886d4e0785ed8f9d8b511467156f223b7765b15a60c55969eff82f01ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ef2d886d4e0785ed8f9d8b511467156f223b7765b15a60c55969eff82f01ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:51:32 compute-0 podman[263074]: 2025-11-29 06:51:32.875049088 +0000 UTC m=+0.491982593 container init 3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:51:32 compute-0 podman[263074]: 2025-11-29 06:51:32.887861304 +0000 UTC m=+0.504794719 container start 3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:51:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:51:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:32.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:51:32 compute-0 nova_compute[251877]: 2025-11-29 06:51:32.919 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 06:51:32 compute-0 nova_compute[251877]: 2025-11-29 06:51:32.920 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 06:51:32 compute-0 nova_compute[251877]: 2025-11-29 06:51:32.943 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:51:33 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:33 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:51:33 compute-0 podman[263074]: 2025-11-29 06:51:33.21362416 +0000 UTC m=+0.830557575 container attach 3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 06:51:33 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:51:33 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4133065214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:51:33 compute-0 nova_compute[251877]: 2025-11-29 06:51:33.688 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:51:33 compute-0 nova_compute[251877]: 2025-11-29 06:51:33.696 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 06:51:33 compute-0 tender_sammet[263090]: {
Nov 29 06:51:33 compute-0 tender_sammet[263090]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:51:33 compute-0 tender_sammet[263090]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:51:33 compute-0 tender_sammet[263090]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:51:33 compute-0 tender_sammet[263090]:         "osd_id": 1,
Nov 29 06:51:33 compute-0 tender_sammet[263090]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:51:33 compute-0 tender_sammet[263090]:         "type": "bluestore"
Nov 29 06:51:33 compute-0 tender_sammet[263090]:     }
Nov 29 06:51:33 compute-0 tender_sammet[263090]: }
Nov 29 06:51:33 compute-0 systemd[1]: libpod-3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f.scope: Deactivated successfully.
Nov 29 06:51:33 compute-0 podman[263074]: 2025-11-29 06:51:33.752578109 +0000 UTC m=+1.369511524 container died 3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 06:51:33 compute-0 ceph-mon[74654]: pgmap v1173: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8ef2d886d4e0785ed8f9d8b511467156f223b7765b15a60c55969eff82f01ce-merged.mount: Deactivated successfully.
Nov 29 06:51:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:34.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:34.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:35 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:35 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3831186218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:51:35 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1735230033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:51:35 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/4133065214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:51:35 compute-0 podman[263074]: 2025-11-29 06:51:35.212162189 +0000 UTC m=+2.829095614 container remove 3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:51:35 compute-0 systemd[1]: libpod-conmon-3db71141127a6fb56fcab2a0592f82fa591858fe1b4c37d463d96e081ee5af5f.scope: Deactivated successfully.
Nov 29 06:51:35 compute-0 sudo[262944]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:51:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:51:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:51:35 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:51:35 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 02fc4dca-0dba-485d-9d31-6521235950f7 does not exist
Nov 29 06:51:35 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 85cac403-ec7d-4055-a086-e80cfec1d036 does not exist
Nov 29 06:51:35 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev dd2363ae-eb36-4341-9f5e-466e076d25d7 does not exist
Nov 29 06:51:35 compute-0 sudo[263148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:35 compute-0 sudo[263148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:35 compute-0 sudo[263148]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:36 compute-0 sudo[263173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:51:36 compute-0 sudo[263173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:36 compute-0 sudo[263173]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:36 compute-0 ceph-mon[74654]: pgmap v1174: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:51:36 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:51:36 compute-0 sudo[263198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:36 compute-0 sudo[263198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:36 compute-0 sudo[263198]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:36 compute-0 sudo[263223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:36 compute-0 sudo[263223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:36 compute-0 sudo[263223]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:36.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:36.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:37 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:37 compute-0 ceph-mon[74654]: pgmap v1175: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:51:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:38.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:38.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:39 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:40 compute-0 nova_compute[251877]: 2025-11-29 06:51:40.526 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 06:51:40 compute-0 nova_compute[251877]: 2025-11-29 06:51:40.528 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 06:51:40 compute-0 nova_compute[251877]: 2025-11-29 06:51:40.529 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 8.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:51:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000056s ======
Nov 29 06:51:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:40.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Nov 29 06:51:40 compute-0 ceph-mon[74654]: pgmap v1176: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:40.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:41 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:42 compute-0 podman[263251]: 2025-11-29 06:51:42.169403335 +0000 UTC m=+0.119202258 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 06:51:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:42.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:42 compute-0 ceph-mon[74654]: pgmap v1177: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:42.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:43 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:43 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:51:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:44.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:44 compute-0 ceph-mon[74654]: pgmap v1178: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:51:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:44.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:51:45 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:45 compute-0 ceph-mon[74654]: pgmap v1179: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:46.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:46 compute-0 sshd-session[263274]: Invalid user ubuntu from 103.63.25.115 port 39368
Nov 29 06:51:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:46.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:47 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:47 compute-0 sshd-session[263274]: Received disconnect from 103.63.25.115 port 39368:11: Bye Bye [preauth]
Nov 29 06:51:47 compute-0 sshd-session[263274]: Disconnected from invalid user ubuntu 103.63.25.115 port 39368 [preauth]
Nov 29 06:51:47 compute-0 ceph-mon[74654]: pgmap v1180: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:48 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:51:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:48.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:48.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:49 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:49 compute-0 ceph-mon[74654]: pgmap v1181: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:50.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:50.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:51 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:52 compute-0 ceph-mon[74654]: pgmap v1182: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:51:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:52.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:51:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:52.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:53 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:53 compute-0 podman[263281]: 2025-11-29 06:51:53.120594179 +0000 UTC m=+0.078742119 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 06:51:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:51:53 compute-0 podman[263282]: 2025-11-29 06:51:53.224838692 +0000 UTC m=+0.181206642 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 06:51:53 compute-0 ceph-mon[74654]: pgmap v1183: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:51:54
Nov 29 06:51:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:51:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:51:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', '.mgr', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', '.rgw.root', 'default.rgw.meta']
Nov 29 06:51:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:51:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:51:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:51:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:51:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:51:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:51:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:51:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:54.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:54 compute-0 sshd-session[263278]: Connection closed by 101.47.163.116 port 58096 [preauth]
Nov 29 06:51:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:54.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:55 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:56 compute-0 nova_compute[251877]: 2025-11-29 06:51:56.038 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 6.27 sec
Nov 29 06:51:56 compute-0 ceph-mon[74654]: pgmap v1184: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:56 compute-0 sudo[263329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:56 compute-0 sudo[263329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:56 compute-0 sudo[263329]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:56.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:56 compute-0 sudo[263354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:51:56 compute-0 sudo[263354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:51:56 compute-0 sudo[263354]: pam_unix(sudo:session): session closed for user root
Nov 29 06:51:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:56.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:57 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:57 compute-0 sshd-session[263327]: Received disconnect from 27.112.78.245 port 53000:11: Bye Bye [preauth]
Nov 29 06:51:57 compute-0 sshd-session[263327]: Disconnected from authenticating user root 27.112.78.245 port 53000 [preauth]
Nov 29 06:51:57 compute-0 ceph-mon[74654]: pgmap v1185: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:51:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:51:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:51:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:51:58.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:51:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:51:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:51:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:51:58.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:51:59 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:00 compute-0 ceph-mon[74654]: pgmap v1186: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:00.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:00.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:01 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:02 compute-0 ceph-mon[74654]: pgmap v1187: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:02.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:02.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:03 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:52:03 compute-0 sshd-session[263382]: Invalid user cumulus from 197.13.24.157 port 40798
Nov 29 06:52:03 compute-0 sshd-session[263382]: Received disconnect from 197.13.24.157 port 40798:11: Bye Bye [preauth]
Nov 29 06:52:03 compute-0 sshd-session[263382]: Disconnected from invalid user cumulus 197.13.24.157 port 40798 [preauth]
Nov 29 06:52:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/3126230656' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:52:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/3126230656' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:52:04 compute-0 ceph-mon[74654]: pgmap v1188: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:04.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:04.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:05 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:06 compute-0 ceph-mon[74654]: pgmap v1189: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:06.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:06.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:07 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:07 compute-0 ceph-mon[74654]: pgmap v1190: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:52:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:08.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:08.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:09 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:09 compute-0 ceph-mon[74654]: pgmap v1191: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:10 compute-0 sshd-session[263388]: Invalid user develop from 176.109.67.96 port 54812
Nov 29 06:52:10 compute-0 sshd-session[263388]: Received disconnect from 176.109.67.96 port 54812:11: Bye Bye [preauth]
Nov 29 06:52:10 compute-0 sshd-session[263388]: Disconnected from invalid user develop 176.109.67.96 port 54812 [preauth]
Nov 29 06:52:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:10.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:10 compute-0 sshd-session[263390]: Invalid user csgoserver from 162.214.92.14 port 45548
Nov 29 06:52:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:52:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:10.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:52:10 compute-0 sshd-session[263390]: Received disconnect from 162.214.92.14 port 45548:11: Bye Bye [preauth]
Nov 29 06:52:10 compute-0 sshd-session[263390]: Disconnected from invalid user csgoserver 162.214.92.14 port 45548 [preauth]
Nov 29 06:52:11 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:12 compute-0 ceph-mon[74654]: pgmap v1192: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:12.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:12.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:13 compute-0 podman[263393]: 2025-11-29 06:52:13.111816566 +0000 UTC m=+0.070251850 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:52:13 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:52:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:52:14 compute-0 ceph-mon[74654]: pgmap v1193: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:14.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:14.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:15 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:15 compute-0 ceph-mon[74654]: pgmap v1194: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:16.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:16 compute-0 sudo[263415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:16 compute-0 sudo[263415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:16 compute-0 sudo[263415]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:16 compute-0 sudo[263440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:16 compute-0 sudo[263440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:16 compute-0 sudo[263440]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:16.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:17 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:52:17.244 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:52:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:52:17.246 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:52:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:52:17.246 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:52:17 compute-0 ceph-mon[74654]: pgmap v1195: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:18 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:52:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:18.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:18.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:19 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:19 compute-0 sshd-session[263466]: Received disconnect from 34.92.81.41 port 56758:11: Bye Bye [preauth]
Nov 29 06:52:19 compute-0 sshd-session[263466]: Disconnected from authenticating user root 34.92.81.41 port 56758 [preauth]
Nov 29 06:52:20 compute-0 ceph-mon[74654]: pgmap v1196: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:20.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:20.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:21 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:21 compute-0 ceph-mon[74654]: pgmap v1197: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:22 compute-0 sshd-session[263470]: Invalid user csgoserver from 103.143.238.173 port 47782
Nov 29 06:52:22 compute-0 sshd-session[263470]: Received disconnect from 103.143.238.173 port 47782:11: Bye Bye [preauth]
Nov 29 06:52:22 compute-0 sshd-session[263470]: Disconnected from invalid user csgoserver 103.143.238.173 port 47782 [preauth]
Nov 29 06:52:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:22.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:22.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:23 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:52:23 compute-0 ceph-mon[74654]: pgmap v1198: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:24 compute-0 podman[263473]: 2025-11-29 06:52:24.091280272 +0000 UTC m=+0.059618352 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 06:52:24 compute-0 podman[263474]: 2025-11-29 06:52:24.133730573 +0000 UTC m=+0.095809528 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Nov 29 06:52:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:52:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:52:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:52:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:52:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:52:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:52:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:24.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:24.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:25 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:25 compute-0 ceph-mon[74654]: pgmap v1199: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:26.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:26 compute-0 sshd-session[263519]: Invalid user ec2-user from 118.193.39.127 port 34820
Nov 29 06:52:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:26.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:27 compute-0 sshd-session[263519]: Received disconnect from 118.193.39.127 port 34820:11: Bye Bye [preauth]
Nov 29 06:52:27 compute-0 sshd-session[263519]: Disconnected from invalid user ec2-user 118.193.39.127 port 34820 [preauth]
Nov 29 06:52:27 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:27 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 29 06:52:27 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:27.499645) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 06:52:27 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 29 06:52:27 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399147499706, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 2107, "num_deletes": 251, "total_data_size": 4139105, "memory_usage": 4200592, "flush_reason": "Manual Compaction"}
Nov 29 06:52:27 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 29 06:52:27 compute-0 ceph-mon[74654]: pgmap v1200: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:28 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399148184337, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 4024087, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20425, "largest_seqno": 22530, "table_properties": {"data_size": 4014455, "index_size": 6126, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19000, "raw_average_key_size": 20, "raw_value_size": 3995455, "raw_average_value_size": 4214, "num_data_blocks": 274, "num_entries": 948, "num_filter_entries": 948, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764398925, "oldest_key_time": 1764398925, "file_creation_time": 1764399147, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:52:28 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 684750 microseconds, and 16535 cpu microseconds.
Nov 29 06:52:28 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:52:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:52:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:28.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:28.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:29 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:28.184392) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 4024087 bytes OK
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:28.184419) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.127134) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.127197) EVENT_LOG_v1 {"time_micros": 1764399149127185, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.127232) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 4130525, prev total WAL file size 4138461, number of live WAL files 2.
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.154935) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(3929KB)], [47(7197KB)]
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149155063, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 11394448, "oldest_snapshot_seqno": -1}
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5129 keys, 9348926 bytes, temperature: kUnknown
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149288673, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 9348926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9314170, "index_size": 20822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 128794, "raw_average_key_size": 25, "raw_value_size": 9220801, "raw_average_value_size": 1797, "num_data_blocks": 857, "num_entries": 5129, "num_filter_entries": 5129, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764399149, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.289417) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 9348926 bytes
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.295001) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.2 rd, 69.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 7.0 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(5.2) write-amplify(2.3) OK, records in: 5648, records dropped: 519 output_compression: NoCompression
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.295043) EVENT_LOG_v1 {"time_micros": 1764399149295024, "job": 24, "event": "compaction_finished", "compaction_time_micros": 133708, "compaction_time_cpu_micros": 42820, "output_level": 6, "num_output_files": 1, "total_output_size": 9348926, "num_input_records": 5648, "num_output_records": 5129, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149296739, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149299977, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.154743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.300029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.300036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.300040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.300044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.300048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.300502) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149300544, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 269, "num_deletes": 256, "total_data_size": 43721, "memory_usage": 50824, "flush_reason": "Manual Compaction"}
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149303801, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 44177, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22531, "largest_seqno": 22799, "table_properties": {"data_size": 42314, "index_size": 92, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4366, "raw_average_key_size": 16, "raw_value_size": 38714, "raw_average_value_size": 145, "num_data_blocks": 4, "num_entries": 266, "num_filter_entries": 266, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764399149, "oldest_key_time": 1764399149, "file_creation_time": 1764399149, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 3352 microseconds, and 1197 cpu microseconds.
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.303855) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 44177 bytes OK
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.303939) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.305789) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.305814) EVENT_LOG_v1 {"time_micros": 1764399149305806, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.305830) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 41642, prev total WAL file size 41642, number of live WAL files 2.
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.306323) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323534' seq:72057594037927935, type:22 .. '6C6F676D00353036' seq:0, type:0; will stop at (end)
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(43KB)], [50(9129KB)]
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149306372, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 9393103, "oldest_snapshot_seqno": -1}
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4876 keys, 9259051 bytes, temperature: kUnknown
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149402170, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9259051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9225451, "index_size": 20306, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 124728, "raw_average_key_size": 25, "raw_value_size": 9135937, "raw_average_value_size": 1873, "num_data_blocks": 830, "num_entries": 4876, "num_filter_entries": 4876, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764399149, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.402857) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9259051 bytes
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.405009) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.9 rd, 96.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 8.9 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(422.2) write-amplify(209.6) OK, records in: 5395, records dropped: 519 output_compression: NoCompression
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.405040) EVENT_LOG_v1 {"time_micros": 1764399149405026, "job": 26, "event": "compaction_finished", "compaction_time_micros": 95897, "compaction_time_cpu_micros": 36542, "output_level": 6, "num_output_files": 1, "total_output_size": 9259051, "num_input_records": 5395, "num_output_records": 4876, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149405508, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399149408847, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.306264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.408925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.408932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.408935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.408938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:52:29 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:52:29.408941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:52:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:52:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:52:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:52:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:52:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:52:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:52:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:52:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:52:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:52:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:52:30 compute-0 ceph-mon[74654]: pgmap v1201: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:30.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:31.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:31 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:31 compute-0 ceph-mon[74654]: pgmap v1202: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:32.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:33.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:33 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:52:34 compute-0 ceph-mon[74654]: pgmap v1203: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:34.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:35.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:35 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:35 compute-0 ceph-mon[74654]: pgmap v1204: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:36 compute-0 sudo[263526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:36 compute-0 sudo[263526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:36 compute-0 sudo[263526]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:36 compute-0 sudo[263551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:52:36 compute-0 sudo[263551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:36 compute-0 sudo[263551]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:36 compute-0 sudo[263576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:36 compute-0 sudo[263576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:36 compute-0 sudo[263576]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.002000056s ======
Nov 29 06:52:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:36.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Nov 29 06:52:36 compute-0 sudo[263603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:52:36 compute-0 sudo[263603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:36 compute-0 sudo[263628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:36 compute-0 sudo[263628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:36 compute-0 sudo[263628]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:36 compute-0 sudo[263660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:36 compute-0 sudo[263660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:36 compute-0 sudo[263660]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:37.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:37 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:37 compute-0 sudo[263603]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:37 compute-0 sshd-session[263594]: Invalid user usuario2 from 193.163.72.91 port 58360
Nov 29 06:52:37 compute-0 sshd-session[263594]: Received disconnect from 193.163.72.91 port 58360:11: Bye Bye [preauth]
Nov 29 06:52:37 compute-0 sshd-session[263594]: Disconnected from invalid user usuario2 193.163.72.91 port 58360 [preauth]
Nov 29 06:52:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 06:52:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:52:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 06:52:38 compute-0 ceph-mon[74654]: pgmap v1205: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:38 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:52:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:52:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:38.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:52:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:39.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:39 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:52:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 06:52:39 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 06:52:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:52:39 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:52:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:52:39 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:52:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:52:39 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:52:39 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 55cde076-0d46-43a1-9b8b-69f9b9d58e27 does not exist
Nov 29 06:52:39 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 0510e2d2-ce6d-4bff-ab43-14e1d6a1b85d does not exist
Nov 29 06:52:39 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 5004ea57-5c85-496f-b414-4cfe043fb214 does not exist
Nov 29 06:52:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:52:39 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:52:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:52:39 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:52:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:52:39 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:52:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:52:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:52:39 compute-0 ceph-mon[74654]: pgmap v1206: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 06:52:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:52:39 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:52:39 compute-0 sudo[263712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:39 compute-0 sudo[263712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:39 compute-0 sudo[263712]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:39 compute-0 sudo[263737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:52:39 compute-0 sudo[263737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:39 compute-0 sudo[263737]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:39 compute-0 sudo[263762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:39 compute-0 sudo[263762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:39 compute-0 sudo[263762]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:39 compute-0 sudo[263787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:52:39 compute-0 sudo[263787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:40 compute-0 podman[263852]: 2025-11-29 06:52:40.188060468 +0000 UTC m=+0.045005513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:52:40 compute-0 podman[263852]: 2025-11-29 06:52:40.464443748 +0000 UTC m=+0.321388743 container create 444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:52:40 compute-0 nova_compute[251877]: 2025-11-29 06:52:40.532 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:52:40 compute-0 nova_compute[251877]: 2025-11-29 06:52:40.533 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:52:40 compute-0 nova_compute[251877]: 2025-11-29 06:52:40.533 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 06:52:40 compute-0 nova_compute[251877]: 2025-11-29 06:52:40.533 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 06:52:40 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:52:40 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:52:40 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:52:40 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:52:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:40.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:40 compute-0 systemd[1]: Started libpod-conmon-444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2.scope.
Nov 29 06:52:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:52:40 compute-0 podman[263852]: 2025-11-29 06:52:40.75839183 +0000 UTC m=+0.615336825 container init 444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 06:52:40 compute-0 podman[263852]: 2025-11-29 06:52:40.772225918 +0000 UTC m=+0.629170873 container start 444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:52:40 compute-0 podman[263852]: 2025-11-29 06:52:40.77836293 +0000 UTC m=+0.635307975 container attach 444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:52:40 compute-0 hopeful_cohen[263869]: 167 167
Nov 29 06:52:40 compute-0 systemd[1]: libpod-444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2.scope: Deactivated successfully.
Nov 29 06:52:40 compute-0 podman[263852]: 2025-11-29 06:52:40.780982213 +0000 UTC m=+0.637927178 container died 444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 29 06:52:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b876d9cad8d01be45621d05765b94f24f79dec86f669bde1270e1b7626651e9-merged.mount: Deactivated successfully.
Nov 29 06:52:40 compute-0 podman[263852]: 2025-11-29 06:52:40.82722537 +0000 UTC m=+0.684170355 container remove 444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:52:40 compute-0 systemd[1]: libpod-conmon-444f39d8b3c3b06f41ec3e8a9b48b14988ad14ce27b253b3587f3211c1c24bd2.scope: Deactivated successfully.
Nov 29 06:52:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:52:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:41.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:52:41 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:41 compute-0 podman[263892]: 2025-11-29 06:52:41.06866516 +0000 UTC m=+0.064888941 container create 355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:52:41 compute-0 systemd[1]: Started libpod-conmon-355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe.scope.
Nov 29 06:52:41 compute-0 podman[263892]: 2025-11-29 06:52:41.043783202 +0000 UTC m=+0.040007023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:52:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f8b26056ea2a64cfdb4f7d770ab8f4021bbe74c6870cd68df07af7fc9e86a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f8b26056ea2a64cfdb4f7d770ab8f4021bbe74c6870cd68df07af7fc9e86a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f8b26056ea2a64cfdb4f7d770ab8f4021bbe74c6870cd68df07af7fc9e86a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f8b26056ea2a64cfdb4f7d770ab8f4021bbe74c6870cd68df07af7fc9e86a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:52:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f8b26056ea2a64cfdb4f7d770ab8f4021bbe74c6870cd68df07af7fc9e86a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:52:41 compute-0 podman[263892]: 2025-11-29 06:52:41.181422512 +0000 UTC m=+0.177646313 container init 355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cannon, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 06:52:41 compute-0 podman[263892]: 2025-11-29 06:52:41.193626484 +0000 UTC m=+0.189850285 container start 355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cannon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 06:52:41 compute-0 podman[263892]: 2025-11-29 06:52:41.198054518 +0000 UTC m=+0.194278339 container attach 355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:52:41 compute-0 ceph-mon[74654]: pgmap v1207: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:41 compute-0 recursing_cannon[263909]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:52:41 compute-0 recursing_cannon[263909]: --> relative data size: 1.0
Nov 29 06:52:41 compute-0 recursing_cannon[263909]: --> All data devices are unavailable
Nov 29 06:52:42 compute-0 systemd[1]: libpod-355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe.scope: Deactivated successfully.
Nov 29 06:52:42 compute-0 podman[263892]: 2025-11-29 06:52:42.00413532 +0000 UTC m=+1.000359091 container died 355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cannon, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:52:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-34f8b26056ea2a64cfdb4f7d770ab8f4021bbe74c6870cd68df07af7fc9e86a3-merged.mount: Deactivated successfully.
Nov 29 06:52:42 compute-0 podman[263892]: 2025-11-29 06:52:42.058068372 +0000 UTC m=+1.054292163 container remove 355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cannon, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:52:42 compute-0 systemd[1]: libpod-conmon-355d706cd92412f2a83773f40f90bbd937b6bf32c6415e15308bdef51bbc5dbe.scope: Deactivated successfully.
Nov 29 06:52:42 compute-0 sudo[263787]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:42 compute-0 sudo[263938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:42 compute-0 sudo[263938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:42 compute-0 sudo[263938]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:42 compute-0 sudo[263963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:52:42 compute-0 sudo[263963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:42 compute-0 sudo[263963]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:42 compute-0 sudo[263988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:42 compute-0 sudo[263988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:42 compute-0 sudo[263988]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:42 compute-0 sudo[264013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:52:42 compute-0 sudo[264013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:42.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:42 compute-0 podman[264079]: 2025-11-29 06:52:42.779300145 +0000 UTC m=+0.040780815 container create ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 06:52:42 compute-0 systemd[1]: Started libpod-conmon-ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441.scope.
Nov 29 06:52:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:52:42 compute-0 podman[264079]: 2025-11-29 06:52:42.761704301 +0000 UTC m=+0.023184991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:52:42 compute-0 podman[264079]: 2025-11-29 06:52:42.870835041 +0000 UTC m=+0.132315731 container init ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:52:42 compute-0 podman[264079]: 2025-11-29 06:52:42.880602375 +0000 UTC m=+0.142083045 container start ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:52:42 compute-0 podman[264079]: 2025-11-29 06:52:42.884676569 +0000 UTC m=+0.146157239 container attach ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:52:42 compute-0 gifted_herschel[264095]: 167 167
Nov 29 06:52:42 compute-0 systemd[1]: libpod-ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441.scope: Deactivated successfully.
Nov 29 06:52:42 compute-0 conmon[264095]: conmon ee5b9ca561a0507d37dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441.scope/container/memory.events
Nov 29 06:52:42 compute-0 podman[264079]: 2025-11-29 06:52:42.887448657 +0000 UTC m=+0.148929357 container died ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 06:52:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a86eb36749eb6541f079174421735e0555a8b5518142a8501390198a44f8dba-merged.mount: Deactivated successfully.
Nov 29 06:52:42 compute-0 podman[264079]: 2025-11-29 06:52:42.93889755 +0000 UTC m=+0.200378220 container remove ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:52:42 compute-0 systemd[1]: libpod-conmon-ee5b9ca561a0507d37dd1fe2c602929f2a736933ee1264c7c2e138e256466441.scope: Deactivated successfully.
Nov 29 06:52:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:43.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:43 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:43 compute-0 podman[264120]: 2025-11-29 06:52:43.186101361 +0000 UTC m=+0.073188943 container create 6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:52:43 compute-0 systemd[1]: Started libpod-conmon-6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9.scope.
Nov 29 06:52:43 compute-0 podman[264120]: 2025-11-29 06:52:43.154179176 +0000 UTC m=+0.041266818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:52:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b1b3123b3eb65886050116b1f3dc88f8af021b3f239b6cedcd2713a16c9d33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b1b3123b3eb65886050116b1f3dc88f8af021b3f239b6cedcd2713a16c9d33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b1b3123b3eb65886050116b1f3dc88f8af021b3f239b6cedcd2713a16c9d33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b1b3123b3eb65886050116b1f3dc88f8af021b3f239b6cedcd2713a16c9d33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:52:43 compute-0 podman[264120]: 2025-11-29 06:52:43.299188082 +0000 UTC m=+0.186275694 container init 6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 06:52:43 compute-0 podman[264120]: 2025-11-29 06:52:43.309198932 +0000 UTC m=+0.196286504 container start 6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:52:43 compute-0 podman[264120]: 2025-11-29 06:52:43.313847513 +0000 UTC m=+0.200935085 container attach 6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:52:43 compute-0 podman[264134]: 2025-11-29 06:52:43.357280651 +0000 UTC m=+0.119953525 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]: {
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:     "1": [
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:         {
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:             "devices": [
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:                 "/dev/loop3"
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:             ],
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:             "lv_name": "ceph_lv0",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:             "lv_size": "7511998464",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:             "name": "ceph_lv0",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:             "tags": {
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:                 "ceph.cluster_name": "ceph",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:                 "ceph.crush_device_class": "",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:                 "ceph.encrypted": "0",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:                 "ceph.osd_id": "1",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:                 "ceph.type": "block",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:                 "ceph.vdo": "0"
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:             },
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:             "type": "block",
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:             "vg_name": "ceph_vg0"
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:         }
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]:     ]
Nov 29 06:52:44 compute-0 vigilant_faraday[264137]: }
Nov 29 06:52:44 compute-0 systemd[1]: libpod-6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9.scope: Deactivated successfully.
Nov 29 06:52:44 compute-0 podman[264120]: 2025-11-29 06:52:44.05024286 +0000 UTC m=+0.937330402 container died 6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:52:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4b1b3123b3eb65886050116b1f3dc88f8af021b3f239b6cedcd2713a16c9d33-merged.mount: Deactivated successfully.
Nov 29 06:52:44 compute-0 podman[264120]: 2025-11-29 06:52:44.111125967 +0000 UTC m=+0.998213509 container remove 6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:52:44 compute-0 systemd[1]: libpod-conmon-6b5717a6d62b06a94134bcfb967e60d5c73f7667a35abf5f204c59fc57bcdee9.scope: Deactivated successfully.
Nov 29 06:52:44 compute-0 sudo[264013]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:52:44 compute-0 sudo[264180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:44 compute-0 sudo[264180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:44 compute-0 sudo[264180]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:44 compute-0 ceph-mon[74654]: pgmap v1208: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:44 compute-0 sudo[264205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:52:44 compute-0 sudo[264205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:44 compute-0 sudo[264205]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:44 compute-0 sudo[264230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:44 compute-0 sudo[264230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:44 compute-0 sudo[264230]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:44 compute-0 sudo[264255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:52:44 compute-0 sudo[264255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:44.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:44 compute-0 podman[264321]: 2025-11-29 06:52:44.86623471 +0000 UTC m=+0.096904789 container create 9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ramanujan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:52:44 compute-0 podman[264321]: 2025-11-29 06:52:44.798606413 +0000 UTC m=+0.029276562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:52:44 compute-0 systemd[1]: Started libpod-conmon-9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b.scope.
Nov 29 06:52:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:52:45 compute-0 podman[264321]: 2025-11-29 06:52:44.999749553 +0000 UTC m=+0.230419702 container init 9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ramanujan, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 06:52:45 compute-0 podman[264321]: 2025-11-29 06:52:45.011845832 +0000 UTC m=+0.242515911 container start 9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 06:52:45 compute-0 podman[264321]: 2025-11-29 06:52:45.01638933 +0000 UTC m=+0.247059499 container attach 9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ramanujan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:52:45 compute-0 vibrant_ramanujan[264337]: 167 167
Nov 29 06:52:45 compute-0 systemd[1]: libpod-9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b.scope: Deactivated successfully.
Nov 29 06:52:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:52:45 compute-0 conmon[264337]: conmon 9e3b8a1a11f259771043 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b.scope/container/memory.events
Nov 29 06:52:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:45.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:52:45 compute-0 podman[264321]: 2025-11-29 06:52:45.022214693 +0000 UTC m=+0.252884802 container died 9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 06:52:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8764278fbd62e8f219fecdca25fe15f8ddef7e12c440487c974abc7059d2720c-merged.mount: Deactivated successfully.
Nov 29 06:52:45 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:45 compute-0 podman[264321]: 2025-11-29 06:52:45.065960959 +0000 UTC m=+0.296631028 container remove 9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ramanujan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 06:52:45 compute-0 systemd[1]: libpod-conmon-9e3b8a1a11f259771043b0215248d9534c9b87d94639cb2e73cb4d01a5b8a62b.scope: Deactivated successfully.
Nov 29 06:52:45 compute-0 podman[264362]: 2025-11-29 06:52:45.242614133 +0000 UTC m=+0.052523814 container create 7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 06:52:45 compute-0 systemd[1]: Started libpod-conmon-7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b.scope.
Nov 29 06:52:45 compute-0 podman[264362]: 2025-11-29 06:52:45.221153301 +0000 UTC m=+0.031063022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:52:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:52:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ac52778b30f401bb59c08ea040498b26cc37aa115d85014e8dec1cfcbd2c39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:52:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ac52778b30f401bb59c08ea040498b26cc37aa115d85014e8dec1cfcbd2c39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:52:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ac52778b30f401bb59c08ea040498b26cc37aa115d85014e8dec1cfcbd2c39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:52:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9ac52778b30f401bb59c08ea040498b26cc37aa115d85014e8dec1cfcbd2c39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:52:45 compute-0 podman[264362]: 2025-11-29 06:52:45.330638451 +0000 UTC m=+0.140548142 container init 7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 06:52:45 compute-0 podman[264362]: 2025-11-29 06:52:45.341230138 +0000 UTC m=+0.151139809 container start 7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:52:45 compute-0 podman[264362]: 2025-11-29 06:52:45.345269771 +0000 UTC m=+0.155179462 container attach 7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 06:52:46 compute-0 ceph-mon[74654]: pgmap v1209: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:46 compute-0 recursing_poitras[264378]: {
Nov 29 06:52:46 compute-0 recursing_poitras[264378]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:52:46 compute-0 recursing_poitras[264378]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:52:46 compute-0 recursing_poitras[264378]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:52:46 compute-0 recursing_poitras[264378]:         "osd_id": 1,
Nov 29 06:52:46 compute-0 recursing_poitras[264378]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:52:46 compute-0 recursing_poitras[264378]:         "type": "bluestore"
Nov 29 06:52:46 compute-0 recursing_poitras[264378]:     }
Nov 29 06:52:46 compute-0 recursing_poitras[264378]: }
Nov 29 06:52:46 compute-0 systemd[1]: libpod-7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b.scope: Deactivated successfully.
Nov 29 06:52:46 compute-0 podman[264362]: 2025-11-29 06:52:46.217670742 +0000 UTC m=+1.027580453 container died 7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:52:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9ac52778b30f401bb59c08ea040498b26cc37aa115d85014e8dec1cfcbd2c39-merged.mount: Deactivated successfully.
Nov 29 06:52:46 compute-0 podman[264362]: 2025-11-29 06:52:46.352173053 +0000 UTC m=+1.162082734 container remove 7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_poitras, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:52:46 compute-0 systemd[1]: libpod-conmon-7195d203970fb6b0419251935e3b2c61bc26c0a953f609ad23befba47b59915b.scope: Deactivated successfully.
Nov 29 06:52:46 compute-0 sudo[264255]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:52:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:52:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:52:46 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:52:46 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev c80b7213-8227-4d4b-a2de-8f8fb582a31e does not exist
Nov 29 06:52:46 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev f68f7640-dbb7-4640-bb41-7ac24cafa439 does not exist
Nov 29 06:52:46 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev dd2d459b-7e4f-47e5-ac01-4d063ba52b39 does not exist
Nov 29 06:52:46 compute-0 sudo[264411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:46 compute-0 sudo[264411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:46 compute-0 sudo[264411]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:46 compute-0 sudo[264436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:52:46 compute-0 sudo[264436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:46 compute-0 sudo[264436]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:46.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:47.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:47 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:52:47 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:52:47 compute-0 ceph-mon[74654]: pgmap v1210: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:48.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:49.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:49 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:52:49 compute-0 ceph-mon[74654]: pgmap v1211: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:50.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:50 compute-0 sshd-session[264463]: Received disconnect from 49.247.35.31 port 44016:11: Bye Bye [preauth]
Nov 29 06:52:50 compute-0 sshd-session[264463]: Disconnected from authenticating user root 49.247.35.31 port 44016 [preauth]
Nov 29 06:52:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:51.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:51 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:52 compute-0 ceph-mon[74654]: pgmap v1212: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:52.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:53.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:53 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:53 compute-0 ceph-mon[74654]: pgmap v1213: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:52:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:52:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:52:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:52:54
Nov 29 06:52:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:52:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:52:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['volumes', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.meta']
Nov 29 06:52:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:52:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:52:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:52:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:52:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:52:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:54.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:55.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:55 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:55 compute-0 podman[264467]: 2025-11-29 06:52:55.14893299 +0000 UTC m=+0.100413367 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 06:52:55 compute-0 podman[264468]: 2025-11-29 06:52:55.19671359 +0000 UTC m=+0.148319860 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 06:52:56 compute-0 ceph-mon[74654]: pgmap v1214: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:56.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:57.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:57 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:57 compute-0 sudo[264512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:57 compute-0 sudo[264512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:57 compute-0 sudo[264512]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:57 compute-0 sudo[264538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:52:57 compute-0 sudo[264538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:52:57 compute-0 sudo[264538]: pam_unix(sudo:session): session closed for user root
Nov 29 06:52:58 compute-0 ceph-mon[74654]: pgmap v1215: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:52:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:52:58.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:52:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:52:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:52:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:52:59.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:52:59 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:52:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:52:59 compute-0 ceph-mon[74654]: pgmap v1216: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:00.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:01.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:01 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:02 compute-0 ceph-mon[74654]: pgmap v1217: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:02.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:03.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:03 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:03 compute-0 ceph-mon[74654]: pgmap v1218: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:53:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:04.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:05.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:05 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:06 compute-0 ceph-mon[74654]: pgmap v1219: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:06.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:07.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:07 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:08 compute-0 ceph-mon[74654]: pgmap v1220: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:08.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:09.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:09 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:53:10 compute-0 ceph-mon[74654]: pgmap v1221: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:10.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:11.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:11 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:11 compute-0 ceph-mon[74654]: pgmap v1222: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:12.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:13.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:53:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:53:14 compute-0 podman[264573]: 2025-11-29 06:53:14.101335824 +0000 UTC m=+0.065931930 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 06:53:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:53:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:14.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:14 compute-0 ceph-mon[74654]: pgmap v1223: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:15.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:15 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:15 compute-0 sshd-session[264571]: Invalid user desliga from 197.13.24.157 port 47748
Nov 29 06:53:15 compute-0 sshd-session[264571]: Received disconnect from 197.13.24.157 port 47748:11: Bye Bye [preauth]
Nov 29 06:53:15 compute-0 sshd-session[264571]: Disconnected from invalid user desliga 197.13.24.157 port 47748 [preauth]
Nov 29 06:53:16 compute-0 ceph-mon[74654]: pgmap v1224: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:16.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:17.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:17 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:53:17.245 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:53:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:53:17.247 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:53:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:53:17.248 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:53:17 compute-0 sudo[264598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:17 compute-0 sudo[264598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:17 compute-0 sudo[264598]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:17 compute-0 sudo[264623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:17 compute-0 sudo[264623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:17 compute-0 sudo[264623]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:17 compute-0 sshd-session[264596]: Invalid user es from 162.214.92.14 port 44702
Nov 29 06:53:17 compute-0 sshd-session[264596]: Received disconnect from 162.214.92.14 port 44702:11: Bye Bye [preauth]
Nov 29 06:53:17 compute-0 sshd-session[264596]: Disconnected from invalid user es 162.214.92.14 port 44702 [preauth]
Nov 29 06:53:18 compute-0 nova_compute[251877]: 2025-11-29 06:53:18.295 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 06:53:18 compute-0 nova_compute[251877]: 2025-11-29 06:53:18.296 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:53:18 compute-0 nova_compute[251877]: 2025-11-29 06:53:18.297 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:53:18 compute-0 nova_compute[251877]: 2025-11-29 06:53:18.297 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:53:18 compute-0 nova_compute[251877]: 2025-11-29 06:53:18.297 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:53:18 compute-0 nova_compute[251877]: 2025-11-29 06:53:18.298 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:53:18 compute-0 nova_compute[251877]: 2025-11-29 06:53:18.298 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:53:18 compute-0 nova_compute[251877]: 2025-11-29 06:53:18.299 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 06:53:18 compute-0 nova_compute[251877]: 2025-11-29 06:53:18.299 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:53:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:18.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:18 compute-0 nova_compute[251877]: 2025-11-29 06:53:18.921 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 52.88 sec
Nov 29 06:53:19 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:19.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:53:19 compute-0 ceph-mon[74654]: pgmap v1225: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:20.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:21 compute-0 ceph-mon[74654]: pgmap v1226: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:21 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:21.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:22 compute-0 ceph-mon[74654]: pgmap v1227: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:22.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:23 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:23.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:23 compute-0 nova_compute[251877]: 2025-11-29 06:53:23.180 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:53:23 compute-0 nova_compute[251877]: 2025-11-29 06:53:23.181 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:53:23 compute-0 nova_compute[251877]: 2025-11-29 06:53:23.181 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:53:23 compute-0 nova_compute[251877]: 2025-11-29 06:53:23.181 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 06:53:23 compute-0 nova_compute[251877]: 2025-11-29 06:53:23.182 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:53:23 compute-0 nova_compute[251877]: 2025-11-29 06:53:23.730 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:53:24 compute-0 nova_compute[251877]: 2025-11-29 06:53:24.005 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 06:53:24 compute-0 nova_compute[251877]: 2025-11-29 06:53:24.007 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5181MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 06:53:24 compute-0 nova_compute[251877]: 2025-11-29 06:53:24.007 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:53:24 compute-0 nova_compute[251877]: 2025-11-29 06:53:24.007 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:53:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:53:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:53:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:53:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:53:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:53:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:53:24 compute-0 ceph-mon[74654]: pgmap v1228: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:24.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:53:25 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:25.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:26 compute-0 podman[264674]: 2025-11-29 06:53:26.10866493 +0000 UTC m=+0.070525588 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 06:53:26 compute-0 podman[264675]: 2025-11-29 06:53:26.183033995 +0000 UTC m=+0.140208562 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 06:53:26 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2185145600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:53:26 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/586851354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:53:26 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2060863307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:53:26 compute-0 ceph-mon[74654]: pgmap v1229: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:26.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:27 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:27.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:27 compute-0 ceph-mon[74654]: pgmap v1230: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:28 compute-0 sshd-session[264719]: Invalid user user2 from 103.31.39.143 port 38558
Nov 29 06:53:28 compute-0 sshd-session[264719]: Received disconnect from 103.31.39.143 port 38558:11: Bye Bye [preauth]
Nov 29 06:53:28 compute-0 sshd-session[264719]: Disconnected from invalid user user2 103.31.39.143 port 38558 [preauth]
Nov 29 06:53:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:28.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:29 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:29.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:29 compute-0 sshd-session[264722]: Received disconnect from 103.143.238.173 port 48432:11: Bye Bye [preauth]
Nov 29 06:53:29 compute-0 sshd-session[264722]: Disconnected from authenticating user root 103.143.238.173 port 48432 [preauth]
Nov 29 06:53:29 compute-0 nova_compute[251877]: 2025-11-29 06:53:29.176 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 0.25 sec
Nov 29 06:53:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:53:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:53:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:53:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:53:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:53:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:53:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:53:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:53:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:53:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:53:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:53:29 compute-0 ceph-mon[74654]: pgmap v1231: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:30.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:30 compute-0 sshd-session[264725]: Invalid user bkp from 176.109.67.96 port 33530
Nov 29 06:53:31 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:31.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:31 compute-0 sshd-session[264725]: Received disconnect from 176.109.67.96 port 33530:11: Bye Bye [preauth]
Nov 29 06:53:31 compute-0 sshd-session[264725]: Disconnected from invalid user bkp 176.109.67.96 port 33530 [preauth]
Nov 29 06:53:32 compute-0 ceph-mon[74654]: pgmap v1232: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:32.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:33 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:33.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:33 compute-0 ceph-mon[74654]: pgmap v1233: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:34.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:53:35 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:35.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:36 compute-0 ceph-mon[74654]: pgmap v1234: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:36.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:37 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:37.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:37 compute-0 sudo[264734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:37 compute-0 sudo[264734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:37 compute-0 sudo[264734]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:37 compute-0 sudo[264759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:37 compute-0 sudo[264759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:37 compute-0 sudo[264759]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 06:53:38 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1865317670' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:53:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 06:53:38 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1865317670' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:53:38 compute-0 ceph-mon[74654]: pgmap v1235: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:38 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/1865317670' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:53:38 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/1865317670' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:53:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:38.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:39 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:39.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 06:53:39 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1260406766' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:53:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 06:53:39 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1260406766' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:53:39 compute-0 sshd-session[264784]: Invalid user ubuntu from 34.92.81.41 port 59092
Nov 29 06:53:39 compute-0 sshd-session[264784]: Received disconnect from 34.92.81.41 port 59092:11: Bye Bye [preauth]
Nov 29 06:53:39 compute-0 sshd-session[264784]: Disconnected from invalid user ubuntu 34.92.81.41 port 59092 [preauth]
Nov 29 06:53:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:53:40 compute-0 ceph-mon[74654]: pgmap v1236: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:40 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/1260406766' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:53:40 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/1260406766' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:53:40 compute-0 sshd-session[264787]: Invalid user janice from 118.193.39.127 port 55530
Nov 29 06:53:40 compute-0 sshd-session[264787]: Received disconnect from 118.193.39.127 port 55530:11: Bye Bye [preauth]
Nov 29 06:53:40 compute-0 sshd-session[264787]: Disconnected from invalid user janice 118.193.39.127 port 55530 [preauth]
Nov 29 06:53:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:40.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:41 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:41.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:41 compute-0 ceph-mon[74654]: pgmap v1237: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:42.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:43 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:43.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:43 compute-0 sshd-session[264790]: Received disconnect from 103.63.25.115 port 44232:11: Bye Bye [preauth]
Nov 29 06:53:43 compute-0 sshd-session[264790]: Disconnected from authenticating user root 103.63.25.115 port 44232 [preauth]
Nov 29 06:53:44 compute-0 ceph-mon[74654]: pgmap v1238: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:44.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:53:45 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:45.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:45 compute-0 podman[264793]: 2025-11-29 06:53:45.112977028 +0000 UTC m=+0.076749953 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 06:53:45 compute-0 ceph-mon[74654]: pgmap v1239: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:46.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:46 compute-0 sudo[264816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:46 compute-0 sudo[264816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:46 compute-0 sudo[264816]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:47 compute-0 sudo[264841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:53:47 compute-0 sudo[264841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:47 compute-0 sudo[264841]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:47 compute-0 sudo[264866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:47 compute-0 sudo[264866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:47 compute-0 sudo[264866]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:47 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:47.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:47 compute-0 sudo[264892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:53:47 compute-0 sudo[264892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:47 compute-0 ceph-mon[74654]: pgmap v1240: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:47 compute-0 sudo[264892]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:53:47 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:53:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:53:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:53:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:53:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:53:47 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 74870220-ebec-41ea-8c18-54ac50238915 does not exist
Nov 29 06:53:47 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev d7485278-69d0-4a24-a944-149c4607bb60 does not exist
Nov 29 06:53:47 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev eda73ca2-2de4-4ebc-9f3e-ba5e88264c0d does not exist
Nov 29 06:53:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:53:47 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:53:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:53:47 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:53:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:53:47 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:53:48 compute-0 sudo[264948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:48 compute-0 sudo[264948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:48 compute-0 sudo[264948]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:48 compute-0 sudo[264973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:53:48 compute-0 sudo[264973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:48 compute-0 sudo[264973]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:48 compute-0 sudo[264998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:48 compute-0 sudo[264998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:48 compute-0 sudo[264998]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:48 compute-0 sudo[265023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:53:48 compute-0 sudo[265023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:48 compute-0 podman[265091]: 2025-11-29 06:53:48.66924455 +0000 UTC m=+0.038525821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:53:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:48.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:53:48 compute-0 podman[265091]: 2025-11-29 06:53:48.967836402 +0000 UTC m=+0.337117633 container create 75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tharp, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 06:53:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:53:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:53:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:53:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:53:48 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:53:49 compute-0 systemd[1]: Started libpod-conmon-75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2.scope.
Nov 29 06:53:49 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:53:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:49.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:49 compute-0 podman[265091]: 2025-11-29 06:53:49.11257511 +0000 UTC m=+0.481856411 container init 75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:53:49 compute-0 podman[265091]: 2025-11-29 06:53:49.124010211 +0000 UTC m=+0.493291412 container start 75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tharp, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 06:53:49 compute-0 podman[265091]: 2025-11-29 06:53:49.127978272 +0000 UTC m=+0.497259473 container attach 75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:53:49 compute-0 cool_tharp[265108]: 167 167
Nov 29 06:53:49 compute-0 systemd[1]: libpod-75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2.scope: Deactivated successfully.
Nov 29 06:53:49 compute-0 podman[265091]: 2025-11-29 06:53:49.133297951 +0000 UTC m=+0.502579152 container died 75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-60d597a257115b476d4c5b0aa24a3c19f1697635a82ad308b2a56a14a458e8a9-merged.mount: Deactivated successfully.
Nov 29 06:53:49 compute-0 podman[265091]: 2025-11-29 06:53:49.183631862 +0000 UTC m=+0.552913053 container remove 75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 06:53:49 compute-0 systemd[1]: libpod-conmon-75dfc07acf72a9ee802be5cbede4f194effd009e7399c6eeb4f6a214649abaf2.scope: Deactivated successfully.
Nov 29 06:53:49 compute-0 podman[265133]: 2025-11-29 06:53:49.372619831 +0000 UTC m=+0.061096294 container create 7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:53:49 compute-0 podman[265133]: 2025-11-29 06:53:49.348748122 +0000 UTC m=+0.037224565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:53:49 compute-0 systemd[1]: Started libpod-conmon-7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18.scope.
Nov 29 06:53:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43e9946745ee197cfb538c5347afc34d53a512211fd8d5b781b31fc8c87e109/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43e9946745ee197cfb538c5347afc34d53a512211fd8d5b781b31fc8c87e109/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43e9946745ee197cfb538c5347afc34d53a512211fd8d5b781b31fc8c87e109/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43e9946745ee197cfb538c5347afc34d53a512211fd8d5b781b31fc8c87e109/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:53:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43e9946745ee197cfb538c5347afc34d53a512211fd8d5b781b31fc8c87e109/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:53:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:53:49 compute-0 podman[265133]: 2025-11-29 06:53:49.855811279 +0000 UTC m=+0.544287802 container init 7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 06:53:49 compute-0 podman[265133]: 2025-11-29 06:53:49.869291507 +0000 UTC m=+0.557767970 container start 7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 06:53:50 compute-0 podman[265133]: 2025-11-29 06:53:50.098223126 +0000 UTC m=+0.786699649 container attach 7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:53:50 compute-0 ceph-mon[74654]: pgmap v1241: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:50 compute-0 xenodochial_kirch[265150]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:53:50 compute-0 xenodochial_kirch[265150]: --> relative data size: 1.0
Nov 29 06:53:50 compute-0 xenodochial_kirch[265150]: --> All data devices are unavailable
Nov 29 06:53:50 compute-0 systemd[1]: libpod-7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18.scope: Deactivated successfully.
Nov 29 06:53:50 compute-0 podman[265133]: 2025-11-29 06:53:50.679727191 +0000 UTC m=+1.368203624 container died 7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:53:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a43e9946745ee197cfb538c5347afc34d53a512211fd8d5b781b31fc8c87e109-merged.mount: Deactivated successfully.
Nov 29 06:53:50 compute-0 podman[265133]: 2025-11-29 06:53:50.741184654 +0000 UTC m=+1.429661087 container remove 7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 06:53:50 compute-0 systemd[1]: libpod-conmon-7e91ce6c9471c5e228b92e290f3ae6ecec9d7432aef3c8215f52290333dd0a18.scope: Deactivated successfully.
Nov 29 06:53:50 compute-0 sudo[265023]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:50 compute-0 sudo[265179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:50.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:50 compute-0 sudo[265179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:50 compute-0 sudo[265179]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:50 compute-0 sudo[265204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:53:50 compute-0 sudo[265204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:50 compute-0 sudo[265204]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:50 compute-0 sudo[265229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:50 compute-0 sudo[265229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:50 compute-0 sudo[265229]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:51 compute-0 sudo[265254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:53:51 compute-0 sudo[265254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:51 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:51.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:51 compute-0 podman[265321]: 2025-11-29 06:53:51.485568536 +0000 UTC m=+0.044917730 container create d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 06:53:51 compute-0 systemd[1]: Started libpod-conmon-d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a.scope.
Nov 29 06:53:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:53:51 compute-0 podman[265321]: 2025-11-29 06:53:51.465327079 +0000 UTC m=+0.024676323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:53:51 compute-0 podman[265321]: 2025-11-29 06:53:51.560841396 +0000 UTC m=+0.120190580 container init d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:53:51 compute-0 podman[265321]: 2025-11-29 06:53:51.568252964 +0000 UTC m=+0.127602198 container start d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bouman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 06:53:51 compute-0 podman[265321]: 2025-11-29 06:53:51.572137623 +0000 UTC m=+0.131486837 container attach d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:53:51 compute-0 inspiring_bouman[265337]: 167 167
Nov 29 06:53:51 compute-0 systemd[1]: libpod-d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a.scope: Deactivated successfully.
Nov 29 06:53:51 compute-0 podman[265321]: 2025-11-29 06:53:51.578698577 +0000 UTC m=+0.138047771 container died d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 06:53:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-090e22c2823eb1a88ee9011518b105a6782830cca9479d3c82e085aec3ef5cff-merged.mount: Deactivated successfully.
Nov 29 06:53:51 compute-0 podman[265321]: 2025-11-29 06:53:51.620312263 +0000 UTC m=+0.179661457 container remove d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bouman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:53:51 compute-0 systemd[1]: libpod-conmon-d4b8429e060e334da16efc53bc2bc64e72cbec3a2db2cadcc89b47c708d13d9a.scope: Deactivated successfully.
Nov 29 06:53:51 compute-0 podman[265360]: 2025-11-29 06:53:51.786655218 +0000 UTC m=+0.037194654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:53:52 compute-0 nova_compute[251877]: 2025-11-29 06:53:52.103 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 06:53:52 compute-0 nova_compute[251877]: 2025-11-29 06:53:52.106 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 06:53:52 compute-0 nova_compute[251877]: 2025-11-29 06:53:52.146 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:53:52 compute-0 podman[265360]: 2025-11-29 06:53:52.285165295 +0000 UTC m=+0.535704771 container create 0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 06:53:52 compute-0 ceph-mon[74654]: pgmap v1242: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:52 compute-0 systemd[1]: Started libpod-conmon-0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8.scope.
Nov 29 06:53:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6508d7e90fee19c06d6f68b9d59b563340f51147c7defbf014e7c414aee8327c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6508d7e90fee19c06d6f68b9d59b563340f51147c7defbf014e7c414aee8327c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6508d7e90fee19c06d6f68b9d59b563340f51147c7defbf014e7c414aee8327c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6508d7e90fee19c06d6f68b9d59b563340f51147c7defbf014e7c414aee8327c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:53:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:52.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:52 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:53:52 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/185560197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:53:52 compute-0 nova_compute[251877]: 2025-11-29 06:53:52.874 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.728s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:53:52 compute-0 nova_compute[251877]: 2025-11-29 06:53:52.881 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 06:53:52 compute-0 podman[265360]: 2025-11-29 06:53:52.901388123 +0000 UTC m=+1.151927599 container init 0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:53:52 compute-0 podman[265360]: 2025-11-29 06:53:52.914295545 +0000 UTC m=+1.164834941 container start 0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 06:53:53 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:53.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:53 compute-0 podman[265360]: 2025-11-29 06:53:53.279833994 +0000 UTC m=+1.530373480 container attach 0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]: {
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:     "1": [
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:         {
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:             "devices": [
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:                 "/dev/loop3"
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:             ],
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:             "lv_name": "ceph_lv0",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:             "lv_size": "7511998464",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:             "name": "ceph_lv0",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:             "tags": {
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:                 "ceph.cluster_name": "ceph",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:                 "ceph.crush_device_class": "",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:                 "ceph.encrypted": "0",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:                 "ceph.osd_id": "1",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:                 "ceph.type": "block",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:                 "ceph.vdo": "0"
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:             },
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:             "type": "block",
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:             "vg_name": "ceph_vg0"
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:         }
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]:     ]
Nov 29 06:53:53 compute-0 thirsty_chebyshev[265396]: }
Nov 29 06:53:53 compute-0 systemd[1]: libpod-0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8.scope: Deactivated successfully.
Nov 29 06:53:53 compute-0 podman[265360]: 2025-11-29 06:53:53.690579901 +0000 UTC m=+1.941119337 container died 0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:53:54 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/3731896802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:53:54 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3803659993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:53:54 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/185560197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:53:54 compute-0 ceph-mon[74654]: pgmap v1243: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:53:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:53:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:53:54
Nov 29 06:53:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:53:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:53:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['images', '.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.meta']
Nov 29 06:53:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:53:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:53:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:53:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:53:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:53:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6508d7e90fee19c06d6f68b9d59b563340f51147c7defbf014e7c414aee8327c-merged.mount: Deactivated successfully.
Nov 29 06:53:54 compute-0 podman[265360]: 2025-11-29 06:53:54.397628185 +0000 UTC m=+2.648167581 container remove 0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:53:54 compute-0 sudo[265254]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:54 compute-0 systemd[1]: libpod-conmon-0c165333b515062c781cfcf388f5626df1eafe337d0cf687acb1bab546b944b8.scope: Deactivated successfully.
Nov 29 06:53:54 compute-0 nova_compute[251877]: 2025-11-29 06:53:54.474 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 06:53:54 compute-0 nova_compute[251877]: 2025-11-29 06:53:54.477 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 06:53:54 compute-0 nova_compute[251877]: 2025-11-29 06:53:54.478 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 30.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:53:54 compute-0 nova_compute[251877]: 2025-11-29 06:53:54.479 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:53:54 compute-0 nova_compute[251877]: 2025-11-29 06:53:54.480 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 06:53:54 compute-0 sudo[265419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:54 compute-0 sudo[265419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:54 compute-0 sudo[265419]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:54 compute-0 sudo[265444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:53:54 compute-0 sudo[265444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:54 compute-0 sudo[265444]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:54 compute-0 sudo[265469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:54 compute-0 sudo[265469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:54 compute-0 sudo[265469]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:54 compute-0 sudo[265494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:53:54 compute-0 sudo[265494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:54.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:53:55 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:55.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:55 compute-0 podman[265559]: 2025-11-29 06:53:55.218619334 +0000 UTC m=+0.035691922 container create cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:53:55 compute-0 systemd[1]: Started libpod-conmon-cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84.scope.
Nov 29 06:53:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:53:55 compute-0 podman[265559]: 2025-11-29 06:53:55.279035678 +0000 UTC m=+0.096108286 container init cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:53:55 compute-0 podman[265559]: 2025-11-29 06:53:55.285251352 +0000 UTC m=+0.102323930 container start cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 06:53:55 compute-0 podman[265559]: 2025-11-29 06:53:55.288508393 +0000 UTC m=+0.105581001 container attach cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:53:55 compute-0 objective_solomon[265575]: 167 167
Nov 29 06:53:55 compute-0 systemd[1]: libpod-cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84.scope: Deactivated successfully.
Nov 29 06:53:55 compute-0 conmon[265575]: conmon cff4cf1240ff526cac91 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84.scope/container/memory.events
Nov 29 06:53:55 compute-0 podman[265559]: 2025-11-29 06:53:55.292419003 +0000 UTC m=+0.109491601 container died cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:53:55 compute-0 podman[265559]: 2025-11-29 06:53:55.203822429 +0000 UTC m=+0.020895027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:53:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcaed3e20aca9184cd78bbf2a63e608ae49d32b4762b104d4e151d527404ae5f-merged.mount: Deactivated successfully.
Nov 29 06:53:55 compute-0 podman[265559]: 2025-11-29 06:53:55.539580223 +0000 UTC m=+0.356652801 container remove cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_solomon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 06:53:55 compute-0 systemd[1]: libpod-conmon-cff4cf1240ff526cac91a3cd62051dae93c09cecb321888c9d29a47db6b82e84.scope: Deactivated successfully.
Nov 29 06:53:55 compute-0 podman[265601]: 2025-11-29 06:53:55.711919065 +0000 UTC m=+0.027508882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:53:56 compute-0 podman[265601]: 2025-11-29 06:53:56.109828282 +0000 UTC m=+0.425418119 container create 72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:53:56 compute-0 systemd[1]: Started libpod-conmon-72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf.scope.
Nov 29 06:53:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce90c13de5dca8120d789a5ec83300d5f4823fbb2e28728959be5ee6f86369b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce90c13de5dca8120d789a5ec83300d5f4823fbb2e28728959be5ee6f86369b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce90c13de5dca8120d789a5ec83300d5f4823fbb2e28728959be5ee6f86369b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce90c13de5dca8120d789a5ec83300d5f4823fbb2e28728959be5ee6f86369b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:53:56 compute-0 ceph-mon[74654]: pgmap v1244: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:53:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:56.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:53:56 compute-0 podman[265601]: 2025-11-29 06:53:56.921464819 +0000 UTC m=+1.237054646 container init 72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:53:56 compute-0 podman[265601]: 2025-11-29 06:53:56.935631466 +0000 UTC m=+1.251221293 container start 72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 06:53:57 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:57.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:57 compute-0 podman[265601]: 2025-11-29 06:53:57.186915042 +0000 UTC m=+1.502504909 container attach 72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 06:53:57 compute-0 podman[265620]: 2025-11-29 06:53:57.245239617 +0000 UTC m=+0.732896760 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 06:53:57 compute-0 podman[265621]: 2025-11-29 06:53:57.286825463 +0000 UTC m=+0.775242228 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 06:53:57 compute-0 sudo[265670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:57 compute-0 sudo[265670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:57 compute-0 sudo[265670]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:57 compute-0 sudo[265701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:57 compute-0 sudo[265701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:57 compute-0 sudo[265701]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:57 compute-0 frosty_lovelace[265618]: {
Nov 29 06:53:57 compute-0 frosty_lovelace[265618]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:53:57 compute-0 frosty_lovelace[265618]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:53:57 compute-0 frosty_lovelace[265618]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:53:57 compute-0 frosty_lovelace[265618]:         "osd_id": 1,
Nov 29 06:53:57 compute-0 frosty_lovelace[265618]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:53:57 compute-0 frosty_lovelace[265618]:         "type": "bluestore"
Nov 29 06:53:57 compute-0 frosty_lovelace[265618]:     }
Nov 29 06:53:57 compute-0 frosty_lovelace[265618]: }
Nov 29 06:53:57 compute-0 systemd[1]: libpod-72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf.scope: Deactivated successfully.
Nov 29 06:53:57 compute-0 podman[265601]: 2025-11-29 06:53:57.865088747 +0000 UTC m=+2.180678604 container died 72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 06:53:58 compute-0 ceph-mon[74654]: pgmap v1245: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ce90c13de5dca8120d789a5ec83300d5f4823fbb2e28728959be5ee6f86369b-merged.mount: Deactivated successfully.
Nov 29 06:53:58 compute-0 podman[265601]: 2025-11-29 06:53:58.639041309 +0000 UTC m=+2.954631136 container remove 72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 06:53:58 compute-0 systemd[1]: libpod-conmon-72c80f2c6232a575cbd45a124e4e0d675ae2797f0afc41b5d4a6cd119272d7bf.scope: Deactivated successfully.
Nov 29 06:53:58 compute-0 sudo[265494]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:53:58 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:53:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:53:58 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:53:58 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 9134e5b1-3cc7-478b-a7f4-4568d6f2d22d does not exist
Nov 29 06:53:58 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev edec7791-c309-493b-8e3d-b167a88696fe does not exist
Nov 29 06:53:58 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 415624cf-b660-4e64-a9c9-b87976fe1667 does not exist
Nov 29 06:53:58 compute-0 sudo[265748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:53:58 compute-0 sudo[265748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:58 compute-0 sudo[265748]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:53:58.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:58 compute-0 sudo[265773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:53:58 compute-0 sudo[265773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:53:58 compute-0 sudo[265773]: pam_unix(sudo:session): session closed for user root
Nov 29 06:53:59 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:53:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:53:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:53:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:53:59.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:53:59 compute-0 sshd-session[265734]: Invalid user jose from 27.112.78.245 port 59276
Nov 29 06:53:59 compute-0 sshd-session[265734]: Received disconnect from 27.112.78.245 port 59276:11: Bye Bye [preauth]
Nov 29 06:53:59 compute-0 sshd-session[265734]: Disconnected from invalid user jose 27.112.78.245 port 59276 [preauth]
Nov 29 06:53:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:53:59 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:53:59 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:53:59 compute-0 ceph-mon[74654]: pgmap v1246: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:00 compute-0 sshd-session[265799]: Invalid user tidb from 193.163.72.91 port 34230
Nov 29 06:54:00 compute-0 sshd-session[265799]: Received disconnect from 193.163.72.91 port 34230:11: Bye Bye [preauth]
Nov 29 06:54:00 compute-0 sshd-session[265799]: Disconnected from invalid user tidb 193.163.72.91 port 34230 [preauth]
Nov 29 06:54:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:00.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:01 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:01.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:02 compute-0 ceph-mon[74654]: pgmap v1247: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:02 compute-0 nova_compute[251877]: 2025-11-29 06:54:02.643 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 13.47 sec
Nov 29 06:54:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:02.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:03 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:03 compute-0 nova_compute[251877]: 2025-11-29 06:54:03.109 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 06:54:03 compute-0 nova_compute[251877]: 2025-11-29 06:54:03.109 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:54:03 compute-0 nova_compute[251877]: 2025-11-29 06:54:03.109 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 06:54:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:03.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:03 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/972693801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:54:03 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/972693801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:54:04 compute-0 ceph-mon[74654]: pgmap v1248: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:04 compute-0 nova_compute[251877]: 2025-11-29 06:54:04.481 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:54:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:04.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:54:05 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:05.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:05 compute-0 ceph-mon[74654]: pgmap v1249: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:06.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:07 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:07.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:08 compute-0 ceph-mon[74654]: pgmap v1250: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:08.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:09 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:09.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:54:10 compute-0 ceph-mon[74654]: pgmap v1251: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:10.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:11 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:11.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:12 compute-0 ceph-mon[74654]: pgmap v1252: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:12.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:13.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:54:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:54:13 compute-0 ceph-mon[74654]: pgmap v1253: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:54:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:14.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:15 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:15.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:16 compute-0 podman[265809]: 2025-11-29 06:54:16.123266718 +0000 UTC m=+0.080054805 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:54:16 compute-0 ceph-mon[74654]: pgmap v1254: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:16.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:17 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:17.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:54:17.246 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:54:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:54:17.247 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:54:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:54:17.248 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:54:17 compute-0 ceph-mon[74654]: pgmap v1255: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:17 compute-0 sudo[265832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:54:17 compute-0 sudo[265832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:54:17 compute-0 sudo[265832]: pam_unix(sudo:session): session closed for user root
Nov 29 06:54:17 compute-0 sudo[265857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:54:17 compute-0 sudo[265857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:54:17 compute-0 sudo[265857]: pam_unix(sudo:session): session closed for user root
Nov 29 06:54:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:54:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:18.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:54:19 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:19.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:19 compute-0 ceph-mon[74654]: pgmap v1256: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:54:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:20.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:21 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:21.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:22 compute-0 ceph-mon[74654]: pgmap v1257: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:22.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:23 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:23.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:23 compute-0 ceph-mon[74654]: pgmap v1258: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:54:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:54:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:54:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:54:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:54:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:54:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:54:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:24.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:25 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:25.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:25 compute-0 nova_compute[251877]: 2025-11-29 06:54:25.280 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 2.63 sec
Nov 29 06:54:25 compute-0 ceph-mon[74654]: pgmap v1259: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:26 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:26 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:26 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:26.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:27 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:27.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:27 compute-0 sshd-session[265888]: Received disconnect from 49.247.35.31 port 50118:11: Bye Bye [preauth]
Nov 29 06:54:27 compute-0 sshd-session[265888]: Disconnected from authenticating user root 49.247.35.31 port 50118 [preauth]
Nov 29 06:54:27 compute-0 sshd-session[265890]: Received disconnect from 197.13.24.157 port 60820:11: Bye Bye [preauth]
Nov 29 06:54:27 compute-0 sshd-session[265890]: Disconnected from authenticating user root 197.13.24.157 port 60820 [preauth]
Nov 29 06:54:28 compute-0 podman[265894]: 2025-11-29 06:54:28.130506351 +0000 UTC m=+0.089851981 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Nov 29 06:54:28 compute-0 podman[265893]: 2025-11-29 06:54:28.146649413 +0000 UTC m=+0.099496301 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 06:54:28 compute-0 ceph-mon[74654]: pgmap v1260: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:28 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:28 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:28 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:28.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:28 compute-0 sshd-session[265935]: Invalid user aj from 162.214.92.14 port 43858
Nov 29 06:54:29 compute-0 sshd-session[265935]: Received disconnect from 162.214.92.14 port 43858:11: Bye Bye [preauth]
Nov 29 06:54:29 compute-0 sshd-session[265935]: Disconnected from invalid user aj 162.214.92.14 port 43858 [preauth]
Nov 29 06:54:29 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:29.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:54:29 compute-0 ceph-mon[74654]: pgmap v1261: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:54:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:54:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:54:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:54:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:54:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:54:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:54:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:54:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:54:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:54:30 compute-0 sshd-session[265885]: Connection closed by 101.47.163.116 port 38910 [preauth]
Nov 29 06:54:30 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:30 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:30 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:30.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:31 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:31.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:32 compute-0 ceph-mon[74654]: pgmap v1262: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:32 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:32 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:32 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:32.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:33 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:33.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:33 compute-0 ceph-mon[74654]: pgmap v1263: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:54:34 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:34 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:34 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:34.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:35 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:35.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:35 compute-0 sshd-session[265941]: Invalid user desliga from 103.143.238.173 port 45636
Nov 29 06:54:35 compute-0 ceph-mon[74654]: pgmap v1264: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:35 compute-0 sshd-session[265941]: Received disconnect from 103.143.238.173 port 45636:11: Bye Bye [preauth]
Nov 29 06:54:35 compute-0 sshd-session[265941]: Disconnected from invalid user desliga 103.143.238.173 port 45636 [preauth]
Nov 29 06:54:36 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/3604148447' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:54:36 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/3604148447' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:54:36 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:36 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:36 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:36.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:37 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:54:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:37.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:54:38 compute-0 sudo[265944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:54:38 compute-0 sudo[265944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:54:38 compute-0 sudo[265944]: pam_unix(sudo:session): session closed for user root
Nov 29 06:54:38 compute-0 sudo[265969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:54:38 compute-0 sudo[265969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:54:38 compute-0 sudo[265969]: pam_unix(sudo:session): session closed for user root
Nov 29 06:54:38 compute-0 ceph-mon[74654]: pgmap v1265: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:38 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:38 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:38 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:38.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:39 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:39.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:54:40 compute-0 ceph-mon[74654]: pgmap v1266: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:40 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:40 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:40 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:40.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:40 compute-0 nova_compute[251877]: 2025-11-29 06:54:40.957 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:54:40 compute-0 nova_compute[251877]: 2025-11-29 06:54:40.958 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:54:41 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:41.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:41 compute-0 ceph-mon[74654]: pgmap v1267: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:42 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:42 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:42 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:42.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:43 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:43.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:43 compute-0 ceph-mon[74654]: pgmap v1268: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:54:44 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:44 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:44 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:44.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:45 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:45.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:45 compute-0 ceph-mon[74654]: pgmap v1269: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:46 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:46 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:46 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:46.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:47 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:47 compute-0 podman[265998]: 2025-11-29 06:54:47.138452673 +0000 UTC m=+0.097160495 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 06:54:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:47.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:47 compute-0 ceph-mon[74654]: pgmap v1270: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:48 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:48 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:48 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:48.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:49 compute-0 nova_compute[251877]: 2025-11-29 06:54:49.073 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 3.80 sec
Nov 29 06:54:49 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:49.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:49 compute-0 nova_compute[251877]: 2025-11-29 06:54:49.364 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:54:49 compute-0 nova_compute[251877]: 2025-11-29 06:54:49.364 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 06:54:49 compute-0 nova_compute[251877]: 2025-11-29 06:54:49.365 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 06:54:49 compute-0 ceph-mon[74654]: pgmap v1271: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:54:50 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:50 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:50 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:50.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:51 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:51.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:52 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:52 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:52 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:52.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:53 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:53.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:53 compute-0 ceph-mon[74654]: pgmap v1272: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:54:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:54:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:54:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:54:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:54:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:54:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:54:54
Nov 29 06:54:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:54:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:54:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.rgw.root']
Nov 29 06:54:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:54:54 compute-0 sshd-session[266024]: Invalid user user10 from 176.109.67.96 port 49938
Nov 29 06:54:54 compute-0 sshd-session[266024]: Received disconnect from 176.109.67.96 port 49938:11: Bye Bye [preauth]
Nov 29 06:54:54 compute-0 sshd-session[266024]: Disconnected from invalid user user10 176.109.67.96 port 49938 [preauth]
Nov 29 06:54:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:54:54 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:54 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:54:54 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:54.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:54:54 compute-0 ceph-mon[74654]: pgmap v1273: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:55 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:55.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:56 compute-0 sshd-session[266026]: Invalid user es from 118.193.39.127 port 53766
Nov 29 06:54:56 compute-0 ceph-mon[74654]: pgmap v1274: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:56 compute-0 sshd-session[266026]: Received disconnect from 118.193.39.127 port 53766:11: Bye Bye [preauth]
Nov 29 06:54:56 compute-0 sshd-session[266026]: Disconnected from invalid user es 118.193.39.127 port 53766 [preauth]
Nov 29 06:54:56 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:56 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:56 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:56.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:57 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:57.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:57 compute-0 ceph-mon[74654]: pgmap v1275: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:58 compute-0 sudo[266030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:54:58 compute-0 sudo[266030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:54:58 compute-0 sudo[266030]: pam_unix(sudo:session): session closed for user root
Nov 29 06:54:58 compute-0 sudo[266067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:54:58 compute-0 sudo[266067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:54:58 compute-0 sudo[266067]: pam_unix(sudo:session): session closed for user root
Nov 29 06:54:58 compute-0 podman[266054]: 2025-11-29 06:54:58.376215861 +0000 UTC m=+0.077030921 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 29 06:54:58 compute-0 podman[266055]: 2025-11-29 06:54:58.400605875 +0000 UTC m=+0.097751632 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 06:54:58 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:58 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:58 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:54:58.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:59 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:54:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:54:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:54:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:54:59.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:54:59 compute-0 sudo[266128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:54:59 compute-0 sudo[266128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:54:59 compute-0 sudo[266128]: pam_unix(sudo:session): session closed for user root
Nov 29 06:54:59 compute-0 sudo[266153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:54:59 compute-0 sudo[266153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:54:59 compute-0 sudo[266153]: pam_unix(sudo:session): session closed for user root
Nov 29 06:54:59 compute-0 sudo[266178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:54:59 compute-0 sudo[266178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:54:59 compute-0 sudo[266178]: pam_unix(sudo:session): session closed for user root
Nov 29 06:54:59 compute-0 sudo[266203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:54:59 compute-0 sudo[266203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:54:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:55:00 compute-0 sudo[266203]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:55:00 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:55:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:55:00 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:55:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:55:00 compute-0 ceph-mon[74654]: pgmap v1276: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:00 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:55:00 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 5c087e73-0188-421e-b770-445446019298 does not exist
Nov 29 06:55:00 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 31f5b2f4-53e1-4661-9fb8-b6483ded1400 does not exist
Nov 29 06:55:00 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 3b417d2d-f933-4e90-b768-2c81bc7e332e does not exist
Nov 29 06:55:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:55:00 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:55:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:55:00 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:55:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:55:00 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:55:00 compute-0 sudo[266261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:55:00 compute-0 sudo[266261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:00 compute-0 sudo[266261]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:00 compute-0 sudo[266286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:55:00 compute-0 sshd-session[266228]: Invalid user scanner from 34.92.81.41 port 33226
Nov 29 06:55:00 compute-0 sudo[266286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:00 compute-0 sudo[266286]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:00 compute-0 sudo[266311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:55:00 compute-0 sudo[266311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:00 compute-0 sudo[266311]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:00 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:00 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:00 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:00.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:00 compute-0 sudo[266336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:55:00 compute-0 sudo[266336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:01 compute-0 sshd-session[266228]: Received disconnect from 34.92.81.41 port 33226:11: Bye Bye [preauth]
Nov 29 06:55:01 compute-0 sshd-session[266228]: Disconnected from invalid user scanner 34.92.81.41 port 33226 [preauth]
Nov 29 06:55:01 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:01.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:01 compute-0 podman[266405]: 2025-11-29 06:55:01.436191406 +0000 UTC m=+0.043110809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:55:01 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:55:01 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:55:01 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:55:01 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:55:01 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:55:01 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:55:02 compute-0 podman[266405]: 2025-11-29 06:55:02.070457461 +0000 UTC m=+0.677376814 container create 19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:55:02 compute-0 systemd[1]: Started libpod-conmon-19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92.scope.
Nov 29 06:55:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:55:02 compute-0 podman[266405]: 2025-11-29 06:55:02.652461349 +0000 UTC m=+1.259380742 container init 19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 06:55:02 compute-0 podman[266405]: 2025-11-29 06:55:02.664496736 +0000 UTC m=+1.271416089 container start 19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:55:02 compute-0 dazzling_heyrovsky[266422]: 167 167
Nov 29 06:55:02 compute-0 systemd[1]: libpod-19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92.scope: Deactivated successfully.
Nov 29 06:55:02 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:02 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:02 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:02.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:03 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:03.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:03 compute-0 ceph-mon[74654]: pgmap v1277: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:03 compute-0 podman[266405]: 2025-11-29 06:55:03.317212046 +0000 UTC m=+1.924131449 container attach 19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 06:55:03 compute-0 podman[266405]: 2025-11-29 06:55:03.31876976 +0000 UTC m=+1.925689123 container died 19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 06:55:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cc6ad4db05974213ff2f4c91fa34c81b3338dc7c747344d2bacf058b4492625-merged.mount: Deactivated successfully.
Nov 29 06:55:04 compute-0 podman[266405]: 2025-11-29 06:55:04.34844431 +0000 UTC m=+2.955363653 container remove 19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_heyrovsky, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 06:55:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2982005399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:55:04 compute-0 ceph-mon[74654]: pgmap v1278: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:04 compute-0 systemd[1]: libpod-conmon-19667375615bd59a41ed3ff11672c19407da79aeb4999e5029c46782300b8b92.scope: Deactivated successfully.
Nov 29 06:55:04 compute-0 podman[266448]: 2025-11-29 06:55:04.494976919 +0000 UTC m=+0.023711896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:55:04 compute-0 podman[266448]: 2025-11-29 06:55:04.668627138 +0000 UTC m=+0.197362075 container create c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:55:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:55:04 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:04 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:04 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:04.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:05 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:05.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:05 compute-0 systemd[1]: Started libpod-conmon-c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd.scope.
Nov 29 06:55:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e7b691f5284613b52a9ba53b4c4a6763224119c250b4f27bd020b14f1567d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e7b691f5284613b52a9ba53b4c4a6763224119c250b4f27bd020b14f1567d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e7b691f5284613b52a9ba53b4c4a6763224119c250b4f27bd020b14f1567d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e7b691f5284613b52a9ba53b4c4a6763224119c250b4f27bd020b14f1567d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21e7b691f5284613b52a9ba53b4c4a6763224119c250b4f27bd020b14f1567d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:55:05 compute-0 podman[266448]: 2025-11-29 06:55:05.731699235 +0000 UTC m=+1.260434222 container init c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 06:55:05 compute-0 ceph-mon[74654]: pgmap v1279: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:05 compute-0 podman[266448]: 2025-11-29 06:55:05.742663062 +0000 UTC m=+1.271397989 container start c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:55:05 compute-0 podman[266448]: 2025-11-29 06:55:05.768849366 +0000 UTC m=+1.297584303 container attach c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:55:06 compute-0 infallible_goldwasser[266466]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:55:06 compute-0 infallible_goldwasser[266466]: --> relative data size: 1.0
Nov 29 06:55:06 compute-0 infallible_goldwasser[266466]: --> All data devices are unavailable
Nov 29 06:55:06 compute-0 systemd[1]: libpod-c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd.scope: Deactivated successfully.
Nov 29 06:55:06 compute-0 podman[266481]: 2025-11-29 06:55:06.716524787 +0000 UTC m=+0.050078074 container died c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:55:06 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:06 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:06 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:06.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-21e7b691f5284613b52a9ba53b4c4a6763224119c250b4f27bd020b14f1567d1-merged.mount: Deactivated successfully.
Nov 29 06:55:07 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:07.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:07 compute-0 ceph-mon[74654]: pgmap v1280: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:07 compute-0 podman[266481]: 2025-11-29 06:55:07.88072801 +0000 UTC m=+1.214281287 container remove c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_goldwasser, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:55:07 compute-0 systemd[1]: libpod-conmon-c377d9274868bf0036e726523b0caae924b73e5f5c12fab35e1bbf2c8da9f6bd.scope: Deactivated successfully.
Nov 29 06:55:07 compute-0 sudo[266336]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:08 compute-0 sudo[266497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:55:08 compute-0 sudo[266497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:08 compute-0 sudo[266497]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:08 compute-0 sudo[266522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:55:08 compute-0 sudo[266522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:08 compute-0 sudo[266522]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:08 compute-0 sudo[266547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:55:08 compute-0 sudo[266547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:08 compute-0 sudo[266547]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:08 compute-0 sudo[266572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:55:08 compute-0 sudo[266572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:08 compute-0 podman[266639]: 2025-11-29 06:55:08.744238661 +0000 UTC m=+0.037307737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:55:08 compute-0 podman[266639]: 2025-11-29 06:55:08.948434607 +0000 UTC m=+0.241503653 container create 50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:55:08 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:08 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:08 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:08.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:09 compute-0 systemd[1]: Started libpod-conmon-50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0.scope.
Nov 29 06:55:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:55:09 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:09 compute-0 podman[266639]: 2025-11-29 06:55:09.166048928 +0000 UTC m=+0.459118064 container init 50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 06:55:09 compute-0 podman[266639]: 2025-11-29 06:55:09.177327595 +0000 UTC m=+0.470396641 container start 50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:55:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:09.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:09 compute-0 jovial_dirac[266655]: 167 167
Nov 29 06:55:09 compute-0 systemd[1]: libpod-50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0.scope: Deactivated successfully.
Nov 29 06:55:09 compute-0 podman[266639]: 2025-11-29 06:55:09.5056214 +0000 UTC m=+0.798690476 container attach 50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:55:09 compute-0 podman[266639]: 2025-11-29 06:55:09.506395772 +0000 UTC m=+0.799464878 container died 50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:55:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:55:10 compute-0 ceph-mon[74654]: pgmap v1281: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:10 compute-0 nova_compute[251877]: 2025-11-29 06:55:10.800 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 11.73 sec
Nov 29 06:55:10 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:10 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:10 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:10.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:11 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:11.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-628117b89897d8dcb258a273753dfc758c5d3fc2fca1b65dd9bc72bdf8fd1b59-merged.mount: Deactivated successfully.
Nov 29 06:55:12 compute-0 podman[266639]: 2025-11-29 06:55:12.848610452 +0000 UTC m=+4.141679528 container remove 50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 06:55:12 compute-0 systemd[1]: libpod-conmon-50d263b9f35e47c6aead7f9b948f69b3565e3589aabef63ee0eaef9c005793d0.scope: Deactivated successfully.
Nov 29 06:55:12 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:12 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:12 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:12.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:13 compute-0 ceph-mon[74654]: pgmap v1282: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:13 compute-0 podman[266681]: 2025-11-29 06:55:13.0208433 +0000 UTC m=+0.026736260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:13.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:55:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:55:13 compute-0 nova_compute[251877]: 2025-11-29 06:55:13.286 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 06:55:13 compute-0 nova_compute[251877]: 2025-11-29 06:55:13.286 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:55:13 compute-0 nova_compute[251877]: 2025-11-29 06:55:13.287 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:55:13 compute-0 nova_compute[251877]: 2025-11-29 06:55:13.287 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:55:13 compute-0 nova_compute[251877]: 2025-11-29 06:55:13.287 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:55:13 compute-0 nova_compute[251877]: 2025-11-29 06:55:13.287 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:55:13 compute-0 nova_compute[251877]: 2025-11-29 06:55:13.287 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:55:13 compute-0 podman[266681]: 2025-11-29 06:55:13.331165042 +0000 UTC m=+0.337057922 container create afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 06:55:13 compute-0 systemd[1]: Started libpod-conmon-afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915.scope.
Nov 29 06:55:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82dbba54450ae4145f2fef696b0fee767ca8dd74e9b0ffa6e7ab4cac9d0925c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82dbba54450ae4145f2fef696b0fee767ca8dd74e9b0ffa6e7ab4cac9d0925c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82dbba54450ae4145f2fef696b0fee767ca8dd74e9b0ffa6e7ab4cac9d0925c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82dbba54450ae4145f2fef696b0fee767ca8dd74e9b0ffa6e7ab4cac9d0925c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:55:13 compute-0 podman[266681]: 2025-11-29 06:55:13.534531433 +0000 UTC m=+0.540424353 container init afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_curran, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:55:13 compute-0 podman[266681]: 2025-11-29 06:55:13.547448995 +0000 UTC m=+0.553341865 container start afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_curran, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 06:55:13 compute-0 podman[266681]: 2025-11-29 06:55:13.634757483 +0000 UTC m=+0.640650393 container attach afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 06:55:14 compute-0 nova_compute[251877]: 2025-11-29 06:55:14.076 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:55:14 compute-0 nova_compute[251877]: 2025-11-29 06:55:14.077 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 06:55:14 compute-0 nova_compute[251877]: 2025-11-29 06:55:14.079 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:55:14 compute-0 relaxed_curran[266699]: {
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:     "1": [
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:         {
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:             "devices": [
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:                 "/dev/loop3"
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:             ],
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:             "lv_name": "ceph_lv0",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:             "lv_size": "7511998464",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:             "name": "ceph_lv0",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:             "tags": {
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:                 "ceph.cluster_name": "ceph",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:                 "ceph.crush_device_class": "",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:                 "ceph.encrypted": "0",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:                 "ceph.osd_id": "1",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:                 "ceph.type": "block",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:                 "ceph.vdo": "0"
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:             },
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:             "type": "block",
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:             "vg_name": "ceph_vg0"
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:         }
Nov 29 06:55:14 compute-0 relaxed_curran[266699]:     ]
Nov 29 06:55:14 compute-0 relaxed_curran[266699]: }
Nov 29 06:55:14 compute-0 systemd[1]: libpod-afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915.scope: Deactivated successfully.
Nov 29 06:55:14 compute-0 podman[266681]: 2025-11-29 06:55:14.361982283 +0000 UTC m=+1.367875253 container died afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 06:55:14 compute-0 ceph-mon[74654]: pgmap v1283: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a82dbba54450ae4145f2fef696b0fee767ca8dd74e9b0ffa6e7ab4cac9d0925c-merged.mount: Deactivated successfully.
Nov 29 06:55:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:55:14 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:14 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:14 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:14.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:15 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:15.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:15 compute-0 nova_compute[251877]: 2025-11-29 06:55:15.467 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:55:15 compute-0 nova_compute[251877]: 2025-11-29 06:55:15.468 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:55:15 compute-0 nova_compute[251877]: 2025-11-29 06:55:15.468 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:55:15 compute-0 nova_compute[251877]: 2025-11-29 06:55:15.469 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 06:55:15 compute-0 nova_compute[251877]: 2025-11-29 06:55:15.469 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:55:15 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/2860548019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:55:15 compute-0 ceph-mon[74654]: pgmap v1284: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:15 compute-0 podman[266681]: 2025-11-29 06:55:15.94684235 +0000 UTC m=+2.952735220 container remove afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_curran, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 06:55:15 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:55:15 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2736452741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:55:15 compute-0 nova_compute[251877]: 2025-11-29 06:55:15.984 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:55:15 compute-0 sudo[266572]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:16 compute-0 systemd[1]: libpod-conmon-afada81ee37eae556172e0a1f9a257136700ee26e56b530f77cd5a2a2b9fb915.scope: Deactivated successfully.
Nov 29 06:55:16 compute-0 sudo[266744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:55:16 compute-0 sudo[266744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:16 compute-0 sudo[266744]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:16 compute-0 sudo[266769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:55:16 compute-0 sudo[266769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:16 compute-0 sudo[266769]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:16 compute-0 nova_compute[251877]: 2025-11-29 06:55:16.192 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 06:55:16 compute-0 nova_compute[251877]: 2025-11-29 06:55:16.194 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5149MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 06:55:16 compute-0 nova_compute[251877]: 2025-11-29 06:55:16.194 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:55:16 compute-0 nova_compute[251877]: 2025-11-29 06:55:16.195 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:55:16 compute-0 sudo[266794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:55:16 compute-0 sudo[266794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:16 compute-0 sudo[266794]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:16 compute-0 sudo[266819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:55:16 compute-0 sudo[266819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:16 compute-0 nova_compute[251877]: 2025-11-29 06:55:16.754 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 06:55:16 compute-0 nova_compute[251877]: 2025-11-29 06:55:16.756 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 06:55:16 compute-0 nova_compute[251877]: 2025-11-29 06:55:16.784 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:55:16 compute-0 podman[266886]: 2025-11-29 06:55:16.697334623 +0000 UTC m=+0.025094495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:55:16 compute-0 podman[266886]: 2025-11-29 06:55:16.854057837 +0000 UTC m=+0.181817669 container create bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 06:55:16 compute-0 systemd[1]: Started libpod-conmon-bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb.scope.
Nov 29 06:55:16 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/3008667807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:55:16 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2736452741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:55:16 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/595463311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:55:16 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:55:16 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:16 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:16.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:17 compute-0 podman[266886]: 2025-11-29 06:55:17.096499975 +0000 UTC m=+0.424259847 container init bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 06:55:17 compute-0 podman[266886]: 2025-11-29 06:55:17.104258262 +0000 UTC m=+0.432018084 container start bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 06:55:17 compute-0 jovial_mccarthy[266922]: 167 167
Nov 29 06:55:17 compute-0 systemd[1]: libpod-bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb.scope: Deactivated successfully.
Nov 29 06:55:17 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:17 compute-0 podman[266886]: 2025-11-29 06:55:17.179653446 +0000 UTC m=+0.507413318 container attach bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 06:55:17 compute-0 podman[266886]: 2025-11-29 06:55:17.180209172 +0000 UTC m=+0.507969004 container died bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 06:55:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:17.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:17 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:55:17 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/570431191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:55:17 compute-0 nova_compute[251877]: 2025-11-29 06:55:17.226 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:55:17 compute-0 nova_compute[251877]: 2025-11-29 06:55:17.234 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 06:55:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:55:17.248 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:55:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:55:17.249 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:55:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:55:17.250 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:55:18 compute-0 nova_compute[251877]: 2025-11-29 06:55:18.285 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 06:55:18 compute-0 nova_compute[251877]: 2025-11-29 06:55:18.288 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 06:55:18 compute-0 nova_compute[251877]: 2025-11-29 06:55:18.288 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:55:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0a701160b6324f0e1216325ff493fa2c61a8ec820a8061bc3a409b403325f17-merged.mount: Deactivated successfully.
Nov 29 06:55:18 compute-0 sudo[266953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:55:18 compute-0 sudo[266953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:18 compute-0 sudo[266953]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:18 compute-0 sudo[266978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:55:18 compute-0 ceph-mon[74654]: pgmap v1285: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:18 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/570431191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:55:18 compute-0 sudo[266978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:18 compute-0 sudo[266978]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:18 compute-0 podman[266886]: 2025-11-29 06:55:18.579548696 +0000 UTC m=+1.907308548 container remove bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 06:55:18 compute-0 systemd[1]: libpod-conmon-bff865cfce159636f07c022d1dbddad749d61d9662d94b3170733b768715a9cb.scope: Deactivated successfully.
Nov 29 06:55:18 compute-0 podman[266942]: 2025-11-29 06:55:18.689553531 +0000 UTC m=+0.808466610 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 06:55:18 compute-0 podman[267021]: 2025-11-29 06:55:18.739552013 +0000 UTC m=+0.021251197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:55:18 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:18 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:18 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:18.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:19 compute-0 podman[267021]: 2025-11-29 06:55:19.008250176 +0000 UTC m=+0.289949340 container create e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 06:55:19 compute-0 systemd[1]: Started libpod-conmon-e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82.scope.
Nov 29 06:55:19 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18437828a75201bec05d5a39490546f409a21d192731f4496c7594472c8532ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18437828a75201bec05d5a39490546f409a21d192731f4496c7594472c8532ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18437828a75201bec05d5a39490546f409a21d192731f4496c7594472c8532ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:55:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18437828a75201bec05d5a39490546f409a21d192731f4496c7594472c8532ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:55:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:19 compute-0 podman[267021]: 2025-11-29 06:55:19.293086683 +0000 UTC m=+0.574785937 container init e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:55:19 compute-0 podman[267021]: 2025-11-29 06:55:19.301296423 +0000 UTC m=+0.582995627 container start e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:55:19 compute-0 podman[267021]: 2025-11-29 06:55:19.305401388 +0000 UTC m=+0.587100592 container attach e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 06:55:19 compute-0 ceph-mon[74654]: pgmap v1286: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:55:20 compute-0 xenodochial_pasteur[267038]: {
Nov 29 06:55:20 compute-0 xenodochial_pasteur[267038]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:55:20 compute-0 xenodochial_pasteur[267038]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:55:20 compute-0 xenodochial_pasteur[267038]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:55:20 compute-0 xenodochial_pasteur[267038]:         "osd_id": 1,
Nov 29 06:55:20 compute-0 xenodochial_pasteur[267038]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:55:20 compute-0 xenodochial_pasteur[267038]:         "type": "bluestore"
Nov 29 06:55:20 compute-0 xenodochial_pasteur[267038]:     }
Nov 29 06:55:20 compute-0 xenodochial_pasteur[267038]: }
Nov 29 06:55:20 compute-0 systemd[1]: libpod-e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82.scope: Deactivated successfully.
Nov 29 06:55:20 compute-0 podman[267021]: 2025-11-29 06:55:20.258137982 +0000 UTC m=+1.539837186 container died e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:55:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-18437828a75201bec05d5a39490546f409a21d192731f4496c7594472c8532ab-merged.mount: Deactivated successfully.
Nov 29 06:55:20 compute-0 podman[267021]: 2025-11-29 06:55:20.367472007 +0000 UTC m=+1.649171211 container remove e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 29 06:55:20 compute-0 systemd[1]: libpod-conmon-e2672228c828e6899de4c93bb090a26cc868553d443295571915b3f6272ffd82.scope: Deactivated successfully.
Nov 29 06:55:20 compute-0 sudo[266819]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:55:20 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:55:20 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:55:20 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:55:20 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev c6920549-8192-4b33-84bc-a6cab230da56 does not exist
Nov 29 06:55:20 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 5f87753c-690a-4b6d-b7b9-ac670c19aff6 does not exist
Nov 29 06:55:20 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 6c8a7437-2a72-470e-8fa2-0ff6f8899a10 does not exist
Nov 29 06:55:20 compute-0 sudo[267071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:55:20 compute-0 sudo[267071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:20 compute-0 sudo[267071]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:20 compute-0 sudo[267096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:55:20 compute-0 sudo[267096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:20 compute-0 sudo[267096]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:20 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:20 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:20 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:20.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:21 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:21.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:21 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:55:21 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:55:21 compute-0 ceph-mon[74654]: pgmap v1287: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:22 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:22 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:22 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:22.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:23 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:23.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:23 compute-0 ceph-mon[74654]: pgmap v1288: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:55:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:55:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:55:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:55:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:55:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:55:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:55:24 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:24 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:24 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:24.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:25 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:25.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:26 compute-0 ceph-mon[74654]: pgmap v1289: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:26.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:27 compute-0 sshd-session[267124]: Received disconnect from 193.163.72.91 port 46940:11: Bye Bye [preauth]
Nov 29 06:55:27 compute-0 sshd-session[267124]: Disconnected from authenticating user root 193.163.72.91 port 46940 [preauth]
Nov 29 06:55:27 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:27.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:28 compute-0 ceph-mon[74654]: pgmap v1290: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:29.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:29 compute-0 podman[267127]: 2025-11-29 06:55:29.130636352 +0000 UTC m=+0.088802941 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 06:55:29 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:29 compute-0 podman[267128]: 2025-11-29 06:55:29.140494058 +0000 UTC m=+0.100463638 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 06:55:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:29.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:29 compute-0 ceph-mon[74654]: pgmap v1291: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:55:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:55:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:55:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:55:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:55:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:55:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:55:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:55:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:55:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:55:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:55:30 compute-0 sshd-session[267172]: Invalid user student from 103.31.39.143 port 33798
Nov 29 06:55:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:31.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:31 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:31 compute-0 sshd-session[267172]: Received disconnect from 103.31.39.143 port 33798:11: Bye Bye [preauth]
Nov 29 06:55:31 compute-0 sshd-session[267172]: Disconnected from invalid user student 103.31.39.143 port 33798 [preauth]
Nov 29 06:55:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:31.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:32 compute-0 ceph-mon[74654]: pgmap v1292: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:33.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:33 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:33.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:33 compute-0 ceph-mon[74654]: pgmap v1293: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:55:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:35.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:35 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:35.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:35 compute-0 ceph-mon[74654]: pgmap v1294: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:35 compute-0 sshd[185364]: Timeout before authentication for connection from 45.78.221.93 to 38.102.83.22, pid = 264729
Nov 29 06:55:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:37.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:37 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:37 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/212983180' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:55:37 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/212983180' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:55:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:37.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:38 compute-0 ceph-mon[74654]: pgmap v1295: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:38 compute-0 sudo[267180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:55:38 compute-0 sudo[267180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:38 compute-0 sudo[267180]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:38 compute-0 sudo[267205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:55:38 compute-0 sudo[267205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:38 compute-0 sudo[267205]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:39.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:39 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:39.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:39 compute-0 ceph-mon[74654]: pgmap v1296: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:39 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:55:40 compute-0 sshd-session[267233]: Invalid user fiscal from 162.214.92.14 port 43030
Nov 29 06:55:40 compute-0 sshd-session[267178]: Received disconnect from 103.63.25.115 port 49054:11: Bye Bye [preauth]
Nov 29 06:55:40 compute-0 sshd-session[267178]: Disconnected from authenticating user root 103.63.25.115 port 49054 [preauth]
Nov 29 06:55:40 compute-0 sshd-session[267233]: Received disconnect from 162.214.92.14 port 43030:11: Bye Bye [preauth]
Nov 29 06:55:40 compute-0 sshd-session[267233]: Disconnected from invalid user fiscal 162.214.92.14 port 43030 [preauth]
Nov 29 06:55:40 compute-0 sshd-session[267231]: Invalid user jose from 197.13.24.157 port 43530
Nov 29 06:55:40 compute-0 sshd-session[267231]: Received disconnect from 197.13.24.157 port 43530:11: Bye Bye [preauth]
Nov 29 06:55:40 compute-0 sshd-session[267231]: Disconnected from invalid user jose 197.13.24.157 port 43530 [preauth]
Nov 29 06:55:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:41.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:41 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:41.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:42 compute-0 ceph-mon[74654]: pgmap v1297: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:43.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:43 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:43.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:43 compute-0 sshd-session[267236]: Received disconnect from 103.143.238.173 port 56238:11: Bye Bye [preauth]
Nov 29 06:55:43 compute-0 sshd-session[267236]: Disconnected from authenticating user root 103.143.238.173 port 56238 [preauth]
Nov 29 06:55:43 compute-0 ceph-mon[74654]: pgmap v1298: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:44 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:55:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:45.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:45 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:45.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:46 compute-0 ceph-mon[74654]: pgmap v1299: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:47.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:47 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:47.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:47 compute-0 ceph-mon[74654]: pgmap v1300: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:49.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:49 compute-0 podman[267241]: 2025-11-29 06:55:49.138115868 +0000 UTC m=+0.094931582 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Nov 29 06:55:49 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:49.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:55:50 compute-0 ceph-mon[74654]: pgmap v1301: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:51.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:51 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:51.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:51 compute-0 ceph-mon[74654]: pgmap v1302: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:53.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:53 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:53.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:54 compute-0 ceph-mon[74654]: pgmap v1303: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:55:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:55:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:55:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:55:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:55:54
Nov 29 06:55:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:55:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:55:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'images', '.rgw.root', 'volumes']
Nov 29 06:55:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:55:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:55:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:55:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:55:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:55.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:55 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:55.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:56 compute-0 ceph-mon[74654]: pgmap v1304: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:55:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:57.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:55:57 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:57.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:57 compute-0 ceph-mon[74654]: pgmap v1305: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:58 compute-0 sudo[267267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:55:58 compute-0 sudo[267267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:58 compute-0 sudo[267267]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:58 compute-0 sudo[267292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:55:58 compute-0 sudo[267292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:55:58 compute-0 sudo[267292]: pam_unix(sudo:session): session closed for user root
Nov 29 06:55:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:55:59.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:59 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:55:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:55:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:55:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:55:59.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:55:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:56:00 compute-0 podman[267318]: 2025-11-29 06:56:00.106124082 +0000 UTC m=+0.063041998 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 06:56:00 compute-0 podman[267319]: 2025-11-29 06:56:00.144221981 +0000 UTC m=+0.105509660 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 06:56:00 compute-0 ceph-mon[74654]: pgmap v1306: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:01.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:01 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:56:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:01.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:56:02 compute-0 sshd-session[267362]: Invalid user csgoserver from 27.112.78.245 port 54268
Nov 29 06:56:02 compute-0 ceph-mon[74654]: pgmap v1307: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:02 compute-0 sshd-session[267362]: Received disconnect from 27.112.78.245 port 54268:11: Bye Bye [preauth]
Nov 29 06:56:02 compute-0 sshd-session[267362]: Disconnected from invalid user csgoserver 27.112.78.245 port 54268 [preauth]
Nov 29 06:56:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:03.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:03 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:56:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:03.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:56:04 compute-0 ceph-mon[74654]: pgmap v1308: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:56:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:05.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:05 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000029s ======
Nov 29 06:56:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:05.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 29 06:56:05 compute-0 nova_compute[251877]: 2025-11-29 06:56:05.485 251881 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 4.68 sec
Nov 29 06:56:05 compute-0 ceph-mon[74654]: pgmap v1309: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:07 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:56:07 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 5513 writes, 24K keys, 5513 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 5513 writes, 5513 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1431 writes, 5940 keys, 1431 commit groups, 1.0 writes per commit group, ingest: 10.24 MB, 0.02 MB/s
                                           Interval WAL: 1431 writes, 1431 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      9.9      3.03              0.12        13    0.233       0      0       0.0       0.0
                                             L6      1/0    8.83 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.6     28.1     23.4      4.63              0.42        12    0.386     60K   6313       0.0       0.0
                                            Sum      1/0    8.83 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.6     16.9     18.1      7.66              0.54        25    0.306     60K   6313       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.2     33.1     33.0      1.19              0.19         8    0.149     21K   1990       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     28.1     23.4      4.63              0.42        12    0.386     60K   6313       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.9      3.03              0.12        12    0.252       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.029, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.05 MB/s read, 7.7 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e1a58311f0#2 capacity: 304.00 MB usage: 11.17 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000111 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(605,10.69 MB,3.51709%) FilterBlock(26,169.55 KB,0.0544648%) IndexBlock(26,319.33 KB,0.10258%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 06:56:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:07.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:07 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:07.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:07 compute-0 sshd-session[267366]: Received disconnect from 49.247.35.31 port 41335:11: Bye Bye [preauth]
Nov 29 06:56:07 compute-0 sshd-session[267366]: Disconnected from authenticating user root 49.247.35.31 port 41335 [preauth]
Nov 29 06:56:08 compute-0 ceph-mon[74654]: pgmap v1310: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:09.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:09 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:09.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:09 compute-0 sshd[185364]: drop connection #0 from [45.78.221.93]:45292 on [38.102.83.22]:22 penalty: exceeded LoginGraceTime
Nov 29 06:56:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:56:10 compute-0 ceph-mon[74654]: pgmap v1311: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:11.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:11 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:11.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:11 compute-0 sshd-session[267370]: Received disconnect from 118.193.39.127 port 47572:11: Bye Bye [preauth]
Nov 29 06:56:11 compute-0 sshd-session[267370]: Disconnected from authenticating user root 118.193.39.127 port 47572 [preauth]
Nov 29 06:56:11 compute-0 ceph-mon[74654]: pgmap v1312: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:13.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:13.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:56:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:56:14 compute-0 ceph-mon[74654]: pgmap v1313: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:56:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:56:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:15.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:56:15 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:15.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:15 compute-0 ceph-mon[74654]: pgmap v1314: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:17.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:17 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:56:17.248 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:56:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:56:17.249 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:56:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:56:17.249 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:56:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:17.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.291 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.292 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:56:18 compute-0 ceph-mon[74654]: pgmap v1315: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.498 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.499 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.499 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.700 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.703 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.703 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.704 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.704 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.704 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.705 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.705 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.706 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.821 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.822 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.822 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.823 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 06:56:18 compute-0 nova_compute[251877]: 2025-11-29 06:56:18.823 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:56:18 compute-0 sshd-session[267376]: Invalid user ubuntu from 176.109.67.96 port 36706
Nov 29 06:56:19 compute-0 sudo[267380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:19 compute-0 sudo[267380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:19 compute-0 sudo[267380]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:19 compute-0 sudo[267423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:19 compute-0 sudo[267423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:19 compute-0 sudo[267423]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:19.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:19 compute-0 sshd-session[267376]: Received disconnect from 176.109.67.96 port 36706:11: Bye Bye [preauth]
Nov 29 06:56:19 compute-0 sshd-session[267376]: Disconnected from invalid user ubuntu 176.109.67.96 port 36706 [preauth]
Nov 29 06:56:19 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:56:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:19.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:56:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:56:19 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1431858114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:56:19 compute-0 nova_compute[251877]: 2025-11-29 06:56:19.306 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.377819) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399379378209, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2079, "num_deletes": 251, "total_data_size": 3950000, "memory_usage": 4013304, "flush_reason": "Manual Compaction"}
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 29 06:56:19 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/217955619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:56:19 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3654944032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:56:19 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1431858114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399379404042, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 3885925, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22800, "largest_seqno": 24878, "table_properties": {"data_size": 3876525, "index_size": 5958, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18781, "raw_average_key_size": 20, "raw_value_size": 3857883, "raw_average_value_size": 4130, "num_data_blocks": 266, "num_entries": 934, "num_filter_entries": 934, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764399150, "oldest_key_time": 1764399150, "file_creation_time": 1764399379, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 26214 microseconds, and 9293 cpu microseconds.
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.404115) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 3885925 bytes OK
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.404146) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.405911) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.406007) EVENT_LOG_v1 {"time_micros": 1764399379405998, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.406036) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 3941580, prev total WAL file size 3941580, number of live WAL files 2.
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.408208) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(3794KB)], [53(9042KB)]
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399379408270, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 13144976, "oldest_snapshot_seqno": -1}
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5293 keys, 11153902 bytes, temperature: kUnknown
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399379484966, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 11153902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11116072, "index_size": 23512, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13253, "raw_key_size": 133939, "raw_average_key_size": 25, "raw_value_size": 11017709, "raw_average_value_size": 2081, "num_data_blocks": 968, "num_entries": 5293, "num_filter_entries": 5293, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764396963, "oldest_key_time": 0, "file_creation_time": 1764399379, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cb6c8f8f-b3b4-4901-9b8e-6f9d7b0da908", "db_session_id": "VL4WOW4AK06DDHF5VQBP", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.485315) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 11153902 bytes
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.486623) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.0 rd, 145.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.8 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 5810, records dropped: 517 output_compression: NoCompression
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.486643) EVENT_LOG_v1 {"time_micros": 1764399379486633, "job": 28, "event": "compaction_finished", "compaction_time_micros": 76860, "compaction_time_cpu_micros": 25602, "output_level": 6, "num_output_files": 1, "total_output_size": 11153902, "num_input_records": 5810, "num_output_records": 5293, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399379487581, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764399379489501, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.407991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.489600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.489607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.489610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.489613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:56:19 compute-0 ceph-mon[74654]: rocksdb: (Original Log Time 2025/11/29-06:56:19.489616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 06:56:19 compute-0 nova_compute[251877]: 2025-11-29 06:56:19.533 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 06:56:19 compute-0 nova_compute[251877]: 2025-11-29 06:56:19.534 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5204MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 06:56:19 compute-0 nova_compute[251877]: 2025-11-29 06:56:19.535 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:56:19 compute-0 nova_compute[251877]: 2025-11-29 06:56:19.535 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:56:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:56:20 compute-0 podman[267451]: 2025-11-29 06:56:20.108350687 +0000 UTC m=+0.075905228 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 29 06:56:20 compute-0 ceph-mon[74654]: pgmap v1316: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:20 compute-0 nova_compute[251877]: 2025-11-29 06:56:20.503 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 06:56:20 compute-0 nova_compute[251877]: 2025-11-29 06:56:20.504 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 06:56:20 compute-0 nova_compute[251877]: 2025-11-29 06:56:20.600 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Refreshing inventories for resource provider 36ed0248-8d04-4532-95bb-daab89f12202 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 06:56:20 compute-0 nova_compute[251877]: 2025-11-29 06:56:20.707 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Updating ProviderTree inventory for provider 36ed0248-8d04-4532-95bb-daab89f12202 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 06:56:20 compute-0 nova_compute[251877]: 2025-11-29 06:56:20.708 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Updating inventory in ProviderTree for provider 36ed0248-8d04-4532-95bb-daab89f12202 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 06:56:20 compute-0 nova_compute[251877]: 2025-11-29 06:56:20.727 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Refreshing aggregate associations for resource provider 36ed0248-8d04-4532-95bb-daab89f12202, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 06:56:20 compute-0 nova_compute[251877]: 2025-11-29 06:56:20.754 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Refreshing trait associations for resource provider 36ed0248-8d04-4532-95bb-daab89f12202, traits: COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE41,HW_CPU_X86_MMX,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 06:56:20 compute-0 nova_compute[251877]: 2025-11-29 06:56:20.782 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:56:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:21.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:21 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:56:21 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/886911526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:56:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:56:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:21.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:56:21 compute-0 nova_compute[251877]: 2025-11-29 06:56:21.269 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:56:21 compute-0 nova_compute[251877]: 2025-11-29 06:56:21.275 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 06:56:21 compute-0 sudo[267497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:21 compute-0 sudo[267497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:21 compute-0 sudo[267497]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:21 compute-0 sudo[267522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:56:21 compute-0 sudo[267522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:21 compute-0 sudo[267522]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:21 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2239183731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:56:21 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3108214857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:56:21 compute-0 ceph-mon[74654]: pgmap v1317: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:21 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/886911526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:56:21 compute-0 sudo[267547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:21 compute-0 sudo[267547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:21 compute-0 sudo[267547]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:21 compute-0 sudo[267572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:56:21 compute-0 sudo[267572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:21 compute-0 nova_compute[251877]: 2025-11-29 06:56:21.662 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 06:56:21 compute-0 nova_compute[251877]: 2025-11-29 06:56:21.665 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 06:56:21 compute-0 nova_compute[251877]: 2025-11-29 06:56:21.665 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:56:22 compute-0 sshd-session[267472]: Invalid user csgoserver from 34.92.81.41 port 35586
Nov 29 06:56:22 compute-0 sudo[267572]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:56:22 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:56:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:56:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:56:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:56:22 compute-0 sshd-session[267472]: Received disconnect from 34.92.81.41 port 35586:11: Bye Bye [preauth]
Nov 29 06:56:22 compute-0 sshd-session[267472]: Disconnected from invalid user csgoserver 34.92.81.41 port 35586 [preauth]
Nov 29 06:56:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:56:22 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 35e135d4-5986-45d0-81d5-1eff459e1465 does not exist
Nov 29 06:56:22 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 6b0359db-83da-475b-a13c-30e41d024927 does not exist
Nov 29 06:56:22 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev aed08f34-b9cf-4b98-ae4b-25a775fb7d8b does not exist
Nov 29 06:56:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:56:22 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:56:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:56:22 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:56:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:56:22 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:56:22 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:56:22 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:56:22 compute-0 sudo[267629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:22 compute-0 sudo[267629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:22 compute-0 sudo[267629]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:23 compute-0 sudo[267654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:56:23 compute-0 sudo[267654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:23 compute-0 sudo[267654]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:23.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:23 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:23 compute-0 sudo[267680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:23 compute-0 sudo[267680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:23 compute-0 sudo[267680]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:56:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:23.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:56:23 compute-0 sudo[267705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:56:23 compute-0 sudo[267705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:23 compute-0 podman[267772]: 2025-11-29 06:56:23.743514024 +0000 UTC m=+0.093821126 container create 9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_raman, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 06:56:23 compute-0 systemd[1]: Started libpod-conmon-9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b.scope.
Nov 29 06:56:23 compute-0 podman[267772]: 2025-11-29 06:56:23.714245751 +0000 UTC m=+0.064552843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:56:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:56:23 compute-0 podman[267772]: 2025-11-29 06:56:23.845627998 +0000 UTC m=+0.195935080 container init 9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_raman, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 06:56:23 compute-0 podman[267772]: 2025-11-29 06:56:23.857666753 +0000 UTC m=+0.207973825 container start 9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_raman, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:56:23 compute-0 podman[267772]: 2025-11-29 06:56:23.861143959 +0000 UTC m=+0.211451061 container attach 9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 06:56:23 compute-0 loving_raman[267788]: 167 167
Nov 29 06:56:23 compute-0 systemd[1]: libpod-9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b.scope: Deactivated successfully.
Nov 29 06:56:23 compute-0 conmon[267788]: conmon 9818aa1b275f3b8e1553 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b.scope/container/memory.events
Nov 29 06:56:23 compute-0 podman[267772]: 2025-11-29 06:56:23.868435652 +0000 UTC m=+0.218742764 container died 9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 06:56:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-14f803234a0a9a9d62e4b6f598794b215fc053b0c29a5d5baa4a701c3835ded2-merged.mount: Deactivated successfully.
Nov 29 06:56:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:56:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:56:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:56:23 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:56:23 compute-0 ceph-mon[74654]: pgmap v1318: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:23 compute-0 podman[267772]: 2025-11-29 06:56:23.924665883 +0000 UTC m=+0.274972995 container remove 9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 29 06:56:23 compute-0 systemd[1]: libpod-conmon-9818aa1b275f3b8e15534ffa5e3e38051b89eb9c072d87d71fb418f7bee6d65b.scope: Deactivated successfully.
Nov 29 06:56:24 compute-0 podman[267812]: 2025-11-29 06:56:24.162852055 +0000 UTC m=+0.085321480 container create db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_noether, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 06:56:24 compute-0 systemd[1]: Started libpod-conmon-db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc.scope.
Nov 29 06:56:24 compute-0 podman[267812]: 2025-11-29 06:56:24.133789278 +0000 UTC m=+0.056258663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:56:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621c267a5f8763c1b61a1f660d11fb1485f8424dda1d4d736242eddd24e5ca6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621c267a5f8763c1b61a1f660d11fb1485f8424dda1d4d736242eddd24e5ca6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621c267a5f8763c1b61a1f660d11fb1485f8424dda1d4d736242eddd24e5ca6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621c267a5f8763c1b61a1f660d11fb1485f8424dda1d4d736242eddd24e5ca6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/621c267a5f8763c1b61a1f660d11fb1485f8424dda1d4d736242eddd24e5ca6c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:56:24 compute-0 podman[267812]: 2025-11-29 06:56:24.281574691 +0000 UTC m=+0.204044046 container init db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_noether, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:56:24 compute-0 podman[267812]: 2025-11-29 06:56:24.29595763 +0000 UTC m=+0.218426945 container start db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_noether, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 06:56:24 compute-0 podman[267812]: 2025-11-29 06:56:24.301266918 +0000 UTC m=+0.223736283 container attach db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_noether, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 06:56:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:56:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:56:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:56:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:56:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:56:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:56:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:56:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:25.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:25 compute-0 hungry_noether[267828]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:56:25 compute-0 hungry_noether[267828]: --> relative data size: 1.0
Nov 29 06:56:25 compute-0 hungry_noether[267828]: --> All data devices are unavailable
Nov 29 06:56:25 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:25 compute-0 systemd[1]: libpod-db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc.scope: Deactivated successfully.
Nov 29 06:56:25 compute-0 podman[267812]: 2025-11-29 06:56:25.179432217 +0000 UTC m=+1.101901542 container died db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_noether, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 06:56:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-621c267a5f8763c1b61a1f660d11fb1485f8424dda1d4d736242eddd24e5ca6c-merged.mount: Deactivated successfully.
Nov 29 06:56:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:56:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:25.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:56:25 compute-0 podman[267812]: 2025-11-29 06:56:25.423861981 +0000 UTC m=+1.346331316 container remove db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 06:56:25 compute-0 systemd[1]: libpod-conmon-db72fac8d643735d2bfec78766de3042de1a5559fda99c3a90e16b1e9c26dcfc.scope: Deactivated successfully.
Nov 29 06:56:25 compute-0 sudo[267705]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:25 compute-0 sudo[267853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:25 compute-0 sudo[267853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:25 compute-0 sudo[267853]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:25 compute-0 sudo[267878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:56:25 compute-0 sudo[267878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:25 compute-0 sudo[267878]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:25 compute-0 sudo[267903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:25 compute-0 sudo[267903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:25 compute-0 sudo[267903]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:25 compute-0 sudo[267928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:56:25 compute-0 sudo[267928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:26 compute-0 podman[267995]: 2025-11-29 06:56:26.134328935 +0000 UTC m=+0.027844914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:56:26 compute-0 podman[267995]: 2025-11-29 06:56:26.26638244 +0000 UTC m=+0.159898439 container create 1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:56:26 compute-0 systemd[1]: Started libpod-conmon-1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff.scope.
Nov 29 06:56:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:56:26 compute-0 podman[267995]: 2025-11-29 06:56:26.35171501 +0000 UTC m=+0.245230989 container init 1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:56:26 compute-0 podman[267995]: 2025-11-29 06:56:26.357975954 +0000 UTC m=+0.251491923 container start 1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:56:26 compute-0 funny_einstein[268011]: 167 167
Nov 29 06:56:26 compute-0 podman[267995]: 2025-11-29 06:56:26.361365688 +0000 UTC m=+0.254881677 container attach 1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 06:56:26 compute-0 systemd[1]: libpod-1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff.scope: Deactivated successfully.
Nov 29 06:56:26 compute-0 podman[267995]: 2025-11-29 06:56:26.362179011 +0000 UTC m=+0.255694970 container died 1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:56:26 compute-0 ceph-mon[74654]: pgmap v1319: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c21f3ca35695c0f7a50d7bb34e8fedbca99f94108739f12f5bfea8723dea9df-merged.mount: Deactivated successfully.
Nov 29 06:56:26 compute-0 podman[267995]: 2025-11-29 06:56:26.581868089 +0000 UTC m=+0.475384058 container remove 1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_einstein, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 06:56:26 compute-0 systemd[1]: libpod-conmon-1607369c730a69bc041fb44bb2deec4a813436d3f6db7a07b292548e3edd99ff.scope: Deactivated successfully.
Nov 29 06:56:26 compute-0 podman[268037]: 2025-11-29 06:56:26.800161999 +0000 UTC m=+0.048687672 container create 0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:56:26 compute-0 systemd[1]: Started libpod-conmon-0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6.scope.
Nov 29 06:56:26 compute-0 podman[268037]: 2025-11-29 06:56:26.775045852 +0000 UTC m=+0.023571615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:56:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4662984d359c09dbe41aee47fbbfd76e205baf0ba40a4c73dddb7882eced013/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4662984d359c09dbe41aee47fbbfd76e205baf0ba40a4c73dddb7882eced013/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4662984d359c09dbe41aee47fbbfd76e205baf0ba40a4c73dddb7882eced013/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:56:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4662984d359c09dbe41aee47fbbfd76e205baf0ba40a4c73dddb7882eced013/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:56:26 compute-0 podman[268037]: 2025-11-29 06:56:26.920389727 +0000 UTC m=+0.168915440 container init 0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:56:26 compute-0 podman[268037]: 2025-11-29 06:56:26.930399065 +0000 UTC m=+0.178924778 container start 0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_morse, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:56:26 compute-0 podman[268037]: 2025-11-29 06:56:26.934514179 +0000 UTC m=+0.183039872 container attach 0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_morse, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 06:56:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:27.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:27 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:27.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:27 compute-0 amazing_morse[268053]: {
Nov 29 06:56:27 compute-0 amazing_morse[268053]:     "1": [
Nov 29 06:56:27 compute-0 amazing_morse[268053]:         {
Nov 29 06:56:27 compute-0 amazing_morse[268053]:             "devices": [
Nov 29 06:56:27 compute-0 amazing_morse[268053]:                 "/dev/loop3"
Nov 29 06:56:27 compute-0 amazing_morse[268053]:             ],
Nov 29 06:56:27 compute-0 amazing_morse[268053]:             "lv_name": "ceph_lv0",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:             "lv_size": "7511998464",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:             "name": "ceph_lv0",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:             "tags": {
Nov 29 06:56:27 compute-0 amazing_morse[268053]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:                 "ceph.cluster_name": "ceph",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:                 "ceph.crush_device_class": "",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:                 "ceph.encrypted": "0",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:                 "ceph.osd_id": "1",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:                 "ceph.type": "block",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:                 "ceph.vdo": "0"
Nov 29 06:56:27 compute-0 amazing_morse[268053]:             },
Nov 29 06:56:27 compute-0 amazing_morse[268053]:             "type": "block",
Nov 29 06:56:27 compute-0 amazing_morse[268053]:             "vg_name": "ceph_vg0"
Nov 29 06:56:27 compute-0 amazing_morse[268053]:         }
Nov 29 06:56:27 compute-0 amazing_morse[268053]:     ]
Nov 29 06:56:27 compute-0 amazing_morse[268053]: }
Nov 29 06:56:27 compute-0 systemd[1]: libpod-0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6.scope: Deactivated successfully.
Nov 29 06:56:27 compute-0 podman[268037]: 2025-11-29 06:56:27.682970598 +0000 UTC m=+0.931496301 container died 0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_morse, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 06:56:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4662984d359c09dbe41aee47fbbfd76e205baf0ba40a4c73dddb7882eced013-merged.mount: Deactivated successfully.
Nov 29 06:56:27 compute-0 podman[268037]: 2025-11-29 06:56:27.76518899 +0000 UTC m=+1.013714703 container remove 0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:56:27 compute-0 systemd[1]: libpod-conmon-0839bb3b01b59df9f9a8cb7b4001321988eff82329517f4d20a5fa30ed9636d6.scope: Deactivated successfully.
Nov 29 06:56:27 compute-0 sudo[267928]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:27 compute-0 ceph-mon[74654]: pgmap v1320: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:27 compute-0 sudo[268074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:27 compute-0 sudo[268074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:27 compute-0 sudo[268074]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:27 compute-0 sudo[268099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:56:28 compute-0 sudo[268099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:28 compute-0 sudo[268099]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:28 compute-0 sudo[268124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:28 compute-0 sudo[268124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:28 compute-0 sudo[268124]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:28 compute-0 sudo[268149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:56:28 compute-0 sudo[268149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:28 compute-0 podman[268215]: 2025-11-29 06:56:28.475599602 +0000 UTC m=+0.074408247 container create 9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:56:28 compute-0 systemd[1]: Started libpod-conmon-9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd.scope.
Nov 29 06:56:28 compute-0 podman[268215]: 2025-11-29 06:56:28.433859793 +0000 UTC m=+0.032668518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:56:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:56:28 compute-0 podman[268215]: 2025-11-29 06:56:28.55583735 +0000 UTC m=+0.154646045 container init 9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 06:56:28 compute-0 podman[268215]: 2025-11-29 06:56:28.567300308 +0000 UTC m=+0.166108993 container start 9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:56:28 compute-0 xenodochial_fermi[268232]: 167 167
Nov 29 06:56:28 compute-0 systemd[1]: libpod-9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd.scope: Deactivated successfully.
Nov 29 06:56:28 compute-0 podman[268215]: 2025-11-29 06:56:28.576471283 +0000 UTC m=+0.175279958 container attach 9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 06:56:28 compute-0 podman[268215]: 2025-11-29 06:56:28.577021958 +0000 UTC m=+0.175830593 container died 9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:56:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5eebab067f95d6b1ec4d3f228680d5f8c36a657802fa63a404231c993c9aae38-merged.mount: Deactivated successfully.
Nov 29 06:56:28 compute-0 podman[268215]: 2025-11-29 06:56:28.682560448 +0000 UTC m=+0.281369103 container remove 9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:56:28 compute-0 systemd[1]: libpod-conmon-9d0894bc3a400dedfb8dc1620f7bda76a79e45c46d2b3d28202ce5a6935eefbd.scope: Deactivated successfully.
Nov 29 06:56:28 compute-0 podman[268256]: 2025-11-29 06:56:28.859701165 +0000 UTC m=+0.038361466 container create a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 29 06:56:28 compute-0 systemd[1]: Started libpod-conmon-a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d.scope.
Nov 29 06:56:28 compute-0 podman[268256]: 2025-11-29 06:56:28.843071404 +0000 UTC m=+0.021731665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:56:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b181f0cf5d29f9b54a367167652028ad973de59311a04976dd6df952858118/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b181f0cf5d29f9b54a367167652028ad973de59311a04976dd6df952858118/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b181f0cf5d29f9b54a367167652028ad973de59311a04976dd6df952858118/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b181f0cf5d29f9b54a367167652028ad973de59311a04976dd6df952858118/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:56:28 compute-0 podman[268256]: 2025-11-29 06:56:28.976826637 +0000 UTC m=+0.155486918 container init a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:56:28 compute-0 podman[268256]: 2025-11-29 06:56:28.986046253 +0000 UTC m=+0.164706564 container start a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:56:29 compute-0 podman[268256]: 2025-11-29 06:56:29.039470045 +0000 UTC m=+0.218130376 container attach a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:56:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:29.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:29 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:56:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:29.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:56:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:56:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:56:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:56:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:56:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:56:29 compute-0 ceph-mon[74654]: pgmap v1321: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:56:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:56:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:56:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:56:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:56:29 compute-0 youthful_bell[268273]: {
Nov 29 06:56:29 compute-0 youthful_bell[268273]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:56:29 compute-0 youthful_bell[268273]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:56:29 compute-0 youthful_bell[268273]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:56:29 compute-0 youthful_bell[268273]:         "osd_id": 1,
Nov 29 06:56:29 compute-0 youthful_bell[268273]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:56:29 compute-0 youthful_bell[268273]:         "type": "bluestore"
Nov 29 06:56:29 compute-0 youthful_bell[268273]:     }
Nov 29 06:56:29 compute-0 youthful_bell[268273]: }
Nov 29 06:56:29 compute-0 systemd[1]: libpod-a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d.scope: Deactivated successfully.
Nov 29 06:56:29 compute-0 podman[268256]: 2025-11-29 06:56:29.941129986 +0000 UTC m=+1.119790257 container died a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 06:56:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-27b181f0cf5d29f9b54a367167652028ad973de59311a04976dd6df952858118-merged.mount: Deactivated successfully.
Nov 29 06:56:30 compute-0 podman[268256]: 2025-11-29 06:56:30.582003508 +0000 UTC m=+1.760663809 container remove a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 06:56:30 compute-0 systemd[1]: libpod-conmon-a05bc3d4303f4a3355d0ba9a4e9edbbae294e91ae26ea439a5cc1a4376e64d4d.scope: Deactivated successfully.
Nov 29 06:56:30 compute-0 sudo[268149]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:30 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:56:30 compute-0 podman[268308]: 2025-11-29 06:56:30.719485634 +0000 UTC m=+0.085682009 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:56:30 compute-0 podman[268309]: 2025-11-29 06:56:30.794091626 +0000 UTC m=+0.158733828 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 06:56:31 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:56:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:56:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:31.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:31 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:31.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:31 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:56:31 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 238f2672-0826-4f72-87b6-7e211a794709 does not exist
Nov 29 06:56:31 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 8425c4c4-3b6f-4678-ad4d-b9f04aaeedfa does not exist
Nov 29 06:56:31 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 779da6db-0c1a-4398-9914-ffdc241f16cb does not exist
Nov 29 06:56:31 compute-0 sudo[268352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:31 compute-0 sudo[268352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:31 compute-0 sudo[268352]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:31 compute-0 sudo[268377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:56:31 compute-0 sudo[268377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:31 compute-0 sudo[268377]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:33 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:56:33 compute-0 ceph-mon[74654]: pgmap v1322: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:33 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:56:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:33.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:33 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:56:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:33.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:56:34 compute-0 ceph-mon[74654]: pgmap v1323: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:35.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:35 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:35.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:35 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:56:36 compute-0 ceph-mon[74654]: pgmap v1324: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:37.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:37 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:37.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:56:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:39.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:56:39 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:39 compute-0 sudo[268406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:39 compute-0 sudo[268406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:39 compute-0 sudo[268406]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:56:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:39.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:56:39 compute-0 sudo[268431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:39 compute-0 sudo[268431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:39 compute-0 sudo[268431]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:40 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/1564014978' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:56:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:56:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:41.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:41 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:56:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:41.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:56:41 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/1564014978' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:56:41 compute-0 ceph-mon[74654]: pgmap v1325: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:41 compute-0 ceph-mon[74654]: pgmap v1326: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:43 compute-0 ceph-mon[74654]: pgmap v1327: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:43.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:43 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:43.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:44 compute-0 ceph-mon[74654]: pgmap v1328: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:45.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:45 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:56:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:45.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:56:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:56:46 compute-0 ceph-mon[74654]: pgmap v1329: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:47.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:47 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:47.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:47 compute-0 ceph-mon[74654]: pgmap v1330: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:48 compute-0 sshd-session[268460]: Invalid user temp from 162.214.92.14 port 42200
Nov 29 06:56:48 compute-0 sshd-session[268460]: Received disconnect from 162.214.92.14 port 42200:11: Bye Bye [preauth]
Nov 29 06:56:48 compute-0 sshd-session[268460]: Disconnected from invalid user temp 162.214.92.14 port 42200 [preauth]
Nov 29 06:56:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:56:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:49.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:56:49 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:49.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:49 compute-0 sshd-session[268463]: Invalid user packer from 197.13.24.157 port 47808
Nov 29 06:56:50 compute-0 sshd-session[268463]: Received disconnect from 197.13.24.157 port 47808:11: Bye Bye [preauth]
Nov 29 06:56:50 compute-0 sshd-session[268463]: Disconnected from invalid user packer 197.13.24.157 port 47808 [preauth]
Nov 29 06:56:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:56:50 compute-0 sshd-session[268465]: Received disconnect from 193.46.255.217 port 64414:11:  [preauth]
Nov 29 06:56:50 compute-0 sshd-session[268465]: Disconnected from authenticating user root 193.46.255.217 port 64414 [preauth]
Nov 29 06:56:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:51.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:51 compute-0 podman[268469]: 2025-11-29 06:56:51.151360506 +0000 UTC m=+0.108080751 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 29 06:56:51 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:51 compute-0 sshd-session[268467]: Invalid user ubuntu from 103.143.238.173 port 41784
Nov 29 06:56:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:56:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:51.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:56:51 compute-0 sshd-session[268467]: Received disconnect from 103.143.238.173 port 41784:11: Bye Bye [preauth]
Nov 29 06:56:51 compute-0 sshd-session[268467]: Disconnected from invalid user ubuntu 103.143.238.173 port 41784 [preauth]
Nov 29 06:56:52 compute-0 sshd-session[268492]: Invalid user david from 193.163.72.91 port 55238
Nov 29 06:56:52 compute-0 sshd-session[268492]: Received disconnect from 193.163.72.91 port 55238:11: Bye Bye [preauth]
Nov 29 06:56:52 compute-0 sshd-session[268492]: Disconnected from invalid user david 193.163.72.91 port 55238 [preauth]
Nov 29 06:56:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:53.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:53 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:56:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:53.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:56:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:56:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:56:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:56:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:56:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:56:54
Nov 29 06:56:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:56:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:56:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'vms', 'default.rgw.meta', 'volumes', 'images']
Nov 29 06:56:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:56:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:56:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:56:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:55.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:55 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:55.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:56:56 compute-0 ceph-mon[74654]: pgmap v1331: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:57.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:57 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:57.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:58 compute-0 ceph-mon[74654]: pgmap v1332: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:58 compute-0 ceph-mon[74654]: pgmap v1333: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:58 compute-0 ceph-mon[74654]: pgmap v1334: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:56:59.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:59 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:56:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:56:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:56:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:56:59.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:56:59 compute-0 sudo[268500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:59 compute-0 sudo[268500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:59 compute-0 sudo[268500]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:59 compute-0 sudo[268525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:56:59 compute-0 sudo[268525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:56:59 compute-0 sudo[268525]: pam_unix(sudo:session): session closed for user root
Nov 29 06:56:59 compute-0 ceph-mon[74654]: pgmap v1335: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:01 compute-0 sshd-session[268497]: Invalid user tecnopos from 101.47.163.116 port 41602
Nov 29 06:57:01 compute-0 ceph-mon[74654]: pgmap v1336: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:01 compute-0 podman[268550]: 2025-11-29 06:57:01.112710684 +0000 UTC m=+0.094798393 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:57:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:01.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:01 compute-0 podman[268551]: 2025-11-29 06:57:01.162945428 +0000 UTC m=+0.141889340 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 06:57:01 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:01 compute-0 sshd-session[268497]: Received disconnect from 101.47.163.116 port 41602:11: Bye Bye [preauth]
Nov 29 06:57:01 compute-0 sshd-session[268497]: Disconnected from invalid user tecnopos 101.47.163.116 port 41602 [preauth]
Nov 29 06:57:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:57:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:57:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:01.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:57:02 compute-0 ceph-mon[74654]: pgmap v1337: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:03.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:03 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:57:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:03.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:57:03 compute-0 ceph-mon[74654]: pgmap v1338: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:05.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:05 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:05.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:05 compute-0 ceph-mon[74654]: pgmap v1339: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:57:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:07.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:07 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:07.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:07 compute-0 ceph-mon[74654]: pgmap v1340: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:09.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:09 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:09.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:10 compute-0 ceph-mon[74654]: pgmap v1341: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:57:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:11.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:57:11 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:11 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:57:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:11.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:12 compute-0 ceph-mon[74654]: pgmap v1342: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:13.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:57:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:57:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:57:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:13.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:57:13 compute-0 ceph-mon[74654]: pgmap v1343: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:15.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:15 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:15.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:16 compute-0 ceph-mon[74654]: pgmap v1344: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:16 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:57:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:57:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:17.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:57:17 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:57:17.249 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:57:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:57:17.250 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:57:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:57:17.251 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:57:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:17.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:18 compute-0 ceph-mon[74654]: pgmap v1345: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:19.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:19 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:19.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:19 compute-0 sudo[268605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:19 compute-0 sudo[268605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:19 compute-0 sudo[268605]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:19 compute-0 sudo[268630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:19 compute-0 sudo[268630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:19 compute-0 sudo[268630]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:19 compute-0 ceph-mon[74654]: pgmap v1346: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:57:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:21.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:57:21 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:21.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:21 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:57:21 compute-0 ceph-mon[74654]: pgmap v1347: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:21 compute-0 nova_compute[251877]: 2025-11-29 06:57:21.667 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:57:21 compute-0 nova_compute[251877]: 2025-11-29 06:57:21.667 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:57:21 compute-0 nova_compute[251877]: 2025-11-29 06:57:21.667 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 06:57:21 compute-0 nova_compute[251877]: 2025-11-29 06:57:21.668 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 06:57:22 compute-0 podman[268656]: 2025-11-29 06:57:22.083677303 +0000 UTC m=+0.053129686 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:57:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:23.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:23 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:23.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:57:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:57:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:57:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:57:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:57:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:57:24 compute-0 ceph-mon[74654]: pgmap v1348: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:57:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:25.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:57:25 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:25.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:25 compute-0 ceph-mon[74654]: pgmap v1349: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:25 compute-0 sshd-session[268677]: Received disconnect from 118.193.39.127 port 48306:11: Bye Bye [preauth]
Nov 29 06:57:25 compute-0 sshd-session[268677]: Disconnected from authenticating user root 118.193.39.127 port 48306 [preauth]
Nov 29 06:57:26 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:57:26 compute-0 nova_compute[251877]: 2025-11-29 06:57:26.762 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 06:57:26 compute-0 nova_compute[251877]: 2025-11-29 06:57:26.762 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:57:26 compute-0 nova_compute[251877]: 2025-11-29 06:57:26.763 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:57:26 compute-0 nova_compute[251877]: 2025-11-29 06:57:26.763 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:57:26 compute-0 nova_compute[251877]: 2025-11-29 06:57:26.763 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:57:26 compute-0 nova_compute[251877]: 2025-11-29 06:57:26.763 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:57:26 compute-0 nova_compute[251877]: 2025-11-29 06:57:26.764 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:57:26 compute-0 nova_compute[251877]: 2025-11-29 06:57:26.764 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 06:57:26 compute-0 nova_compute[251877]: 2025-11-29 06:57:26.764 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:57:27 compute-0 nova_compute[251877]: 2025-11-29 06:57:27.172 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:57:27 compute-0 nova_compute[251877]: 2025-11-29 06:57:27.173 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:57:27 compute-0 nova_compute[251877]: 2025-11-29 06:57:27.173 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:57:27 compute-0 nova_compute[251877]: 2025-11-29 06:57:27.173 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 06:57:27 compute-0 nova_compute[251877]: 2025-11-29 06:57:27.174 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:57:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:27.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:27 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:27.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:27 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:57:27 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1769306037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:57:27 compute-0 nova_compute[251877]: 2025-11-29 06:57:27.795 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:57:27 compute-0 ceph-mon[74654]: pgmap v1350: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:27 compute-0 nova_compute[251877]: 2025-11-29 06:57:27.949 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 06:57:27 compute-0 nova_compute[251877]: 2025-11-29 06:57:27.951 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5195MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 06:57:27 compute-0 nova_compute[251877]: 2025-11-29 06:57:27.951 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:57:27 compute-0 nova_compute[251877]: 2025-11-29 06:57:27.951 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:57:28 compute-0 nova_compute[251877]: 2025-11-29 06:57:28.314 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 06:57:28 compute-0 nova_compute[251877]: 2025-11-29 06:57:28.314 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 06:57:28 compute-0 nova_compute[251877]: 2025-11-29 06:57:28.331 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:57:28 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:57:28 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/526359832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:57:28 compute-0 nova_compute[251877]: 2025-11-29 06:57:28.755 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:57:28 compute-0 nova_compute[251877]: 2025-11-29 06:57:28.760 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 06:57:28 compute-0 nova_compute[251877]: 2025-11-29 06:57:28.881 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 06:57:28 compute-0 nova_compute[251877]: 2025-11-29 06:57:28.882 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 06:57:28 compute-0 nova_compute[251877]: 2025-11-29 06:57:28.883 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:57:29 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/2616126904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:57:29 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1769306037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:57:29 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2389936592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:57:29 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3901645275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:57:29 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/526359832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:57:29 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1141100869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:57:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:29.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:29 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:29.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:57:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:57:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:57:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:57:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:57:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:57:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:57:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:57:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:57:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:57:30 compute-0 ceph-mon[74654]: pgmap v1351: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:31 compute-0 nova_compute[251877]: 2025-11-29 06:57:31.169 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:57:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:31 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:31.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:31.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:31 compute-0 nova_compute[251877]: 2025-11-29 06:57:31.337 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:57:31 compute-0 nova_compute[251877]: 2025-11-29 06:57:31.338 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 06:57:31 compute-0 nova_compute[251877]: 2025-11-29 06:57:31.338 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 06:57:31 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:57:31 compute-0 nova_compute[251877]: 2025-11-29 06:57:31.506 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 06:57:31 compute-0 nova_compute[251877]: 2025-11-29 06:57:31.507 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:57:31 compute-0 sudo[268728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:31 compute-0 sudo[268728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:31 compute-0 sudo[268728]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:31 compute-0 sudo[268765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:57:31 compute-0 sudo[268765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:31 compute-0 sudo[268765]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:31 compute-0 podman[268752]: 2025-11-29 06:57:31.967592811 +0000 UTC m=+0.053183117 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 06:57:31 compute-0 podman[268753]: 2025-11-29 06:57:31.991167756 +0000 UTC m=+0.075988931 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 29 06:57:32 compute-0 sudo[268816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:32 compute-0 sudo[268816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:32 compute-0 sudo[268816]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:32 compute-0 sudo[268847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 06:57:32 compute-0 sudo[268847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:32 compute-0 ceph-mon[74654]: pgmap v1352: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:32 compute-0 sudo[268847]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:57:32 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:57:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 06:57:32 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:57:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 06:57:32 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:57:32 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 20167e10-ca0a-443e-9dd6-d627c331691a does not exist
Nov 29 06:57:32 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev c8f264a3-2e06-4f3e-91c0-309ea57b7fbd does not exist
Nov 29 06:57:32 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 51bda47d-7748-457e-9df3-357005b1281d does not exist
Nov 29 06:57:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 06:57:32 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:57:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 06:57:32 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:57:32 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:57:32 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:57:32 compute-0 sudo[268903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:32 compute-0 sudo[268903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:32 compute-0 sudo[268903]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:32 compute-0 sudo[268928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:57:32 compute-0 sudo[268928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:32 compute-0 sudo[268928]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:33 compute-0 sudo[268953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:33 compute-0 sudo[268953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:33 compute-0 sudo[268953]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:33 compute-0 sudo[268978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Nov 29 06:57:33 compute-0 sudo[268978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:33 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:33.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:33.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:33 compute-0 podman[269045]: 2025-11-29 06:57:33.513562799 +0000 UTC m=+0.041371160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:57:33 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:57:33 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 06:57:33 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:57:33 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 06:57:33 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 06:57:33 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:57:33 compute-0 podman[269045]: 2025-11-29 06:57:33.85329451 +0000 UTC m=+0.381102791 container create bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:57:33 compute-0 systemd[1]: Started libpod-conmon-bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c.scope.
Nov 29 06:57:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:57:34 compute-0 podman[269045]: 2025-11-29 06:57:34.019352589 +0000 UTC m=+0.547160880 container init bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tesla, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 06:57:34 compute-0 podman[269045]: 2025-11-29 06:57:34.025628974 +0000 UTC m=+0.553437245 container start bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tesla, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:57:34 compute-0 systemd[1]: libpod-bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c.scope: Deactivated successfully.
Nov 29 06:57:34 compute-0 lucid_tesla[269061]: 167 167
Nov 29 06:57:34 compute-0 conmon[269061]: conmon bb5e782bcd1f1d815c5e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c.scope/container/memory.events
Nov 29 06:57:34 compute-0 podman[269045]: 2025-11-29 06:57:34.060585935 +0000 UTC m=+0.588394226 container attach bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:57:34 compute-0 podman[269045]: 2025-11-29 06:57:34.064223195 +0000 UTC m=+0.592031476 container died bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tesla, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:57:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-a71e8f1a99914d4ec1ede94c28a7b36f7b27f448fd8e2662a7d481aa2b8c9a97-merged.mount: Deactivated successfully.
Nov 29 06:57:34 compute-0 podman[269045]: 2025-11-29 06:57:34.202992448 +0000 UTC m=+0.730800719 container remove bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 06:57:34 compute-0 systemd[1]: libpod-conmon-bb5e782bcd1f1d815c5ee99029f6c13c502320ba3f23dbbb640c112d6bef065c.scope: Deactivated successfully.
Nov 29 06:57:34 compute-0 podman[269086]: 2025-11-29 06:57:34.35289487 +0000 UTC m=+0.034819898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:57:34 compute-0 podman[269086]: 2025-11-29 06:57:34.493964116 +0000 UTC m=+0.175889164 container create e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Nov 29 06:57:34 compute-0 systemd[1]: Started libpod-conmon-e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb.scope.
Nov 29 06:57:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b4c7dad47dba606e39eed5f6b80e0b4480a301b1cc4a52c2f94ac342fb1e1e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b4c7dad47dba606e39eed5f6b80e0b4480a301b1cc4a52c2f94ac342fb1e1e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b4c7dad47dba606e39eed5f6b80e0b4480a301b1cc4a52c2f94ac342fb1e1e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b4c7dad47dba606e39eed5f6b80e0b4480a301b1cc4a52c2f94ac342fb1e1e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:57:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b4c7dad47dba606e39eed5f6b80e0b4480a301b1cc4a52c2f94ac342fb1e1e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 06:57:34 compute-0 podman[269086]: 2025-11-29 06:57:34.666990119 +0000 UTC m=+0.348915147 container init e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 06:57:34 compute-0 ceph-mon[74654]: pgmap v1353: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:34 compute-0 podman[269086]: 2025-11-29 06:57:34.676702299 +0000 UTC m=+0.358627297 container start e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:57:34 compute-0 podman[269086]: 2025-11-29 06:57:34.707506074 +0000 UTC m=+0.389431112 container attach e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:57:35 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:35.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:35.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:35 compute-0 sweet_ellis[269103]: --> passed data devices: 0 physical, 1 LVM
Nov 29 06:57:35 compute-0 sweet_ellis[269103]: --> relative data size: 1.0
Nov 29 06:57:35 compute-0 sweet_ellis[269103]: --> All data devices are unavailable
Nov 29 06:57:35 compute-0 systemd[1]: libpod-e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb.scope: Deactivated successfully.
Nov 29 06:57:35 compute-0 podman[269086]: 2025-11-29 06:57:35.546707492 +0000 UTC m=+1.228632550 container died e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 06:57:35 compute-0 ceph-mon[74654]: pgmap v1354: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b4c7dad47dba606e39eed5f6b80e0b4480a301b1cc4a52c2f94ac342fb1e1e6-merged.mount: Deactivated successfully.
Nov 29 06:57:36 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:57:36 compute-0 sshd-session[269109]: Invalid user usuario2 from 103.31.39.143 port 43432
Nov 29 06:57:36 compute-0 sshd-session[269109]: Received disconnect from 103.31.39.143 port 43432:11: Bye Bye [preauth]
Nov 29 06:57:36 compute-0 sshd-session[269109]: Disconnected from invalid user usuario2 103.31.39.143 port 43432 [preauth]
Nov 29 06:57:37 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:57:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:37.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:57:37 compute-0 podman[269086]: 2025-11-29 06:57:37.212943038 +0000 UTC m=+2.894868076 container remove e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 06:57:37 compute-0 systemd[1]: libpod-conmon-e4925a628528e38d86ee2a9dadcacbedcce68be727c951bebc12ce41b10447eb.scope: Deactivated successfully.
Nov 29 06:57:37 compute-0 sudo[268978]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:37 compute-0 sudo[269136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:37 compute-0 sudo[269136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:37 compute-0 sudo[269136]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:37.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:37 compute-0 sudo[269161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:57:37 compute-0 sudo[269161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:37 compute-0 sudo[269161]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:37 compute-0 sudo[269186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:37 compute-0 sudo[269186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:37 compute-0 sudo[269186]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:37 compute-0 sudo[269211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- lvm list --format json
Nov 29 06:57:37 compute-0 sudo[269211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:37 compute-0 podman[269276]: 2025-11-29 06:57:37.918729371 +0000 UTC m=+0.108120842 container create 66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 06:57:37 compute-0 podman[269276]: 2025-11-29 06:57:37.8423076 +0000 UTC m=+0.031699131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:57:37 compute-0 systemd[1]: Started libpod-conmon-66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae.scope.
Nov 29 06:57:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:57:38 compute-0 podman[269276]: 2025-11-29 06:57:38.013029929 +0000 UTC m=+0.202421400 container init 66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 06:57:38 compute-0 podman[269276]: 2025-11-29 06:57:38.022389779 +0000 UTC m=+0.211781250 container start 66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 06:57:38 compute-0 podman[269276]: 2025-11-29 06:57:38.026308568 +0000 UTC m=+0.215700039 container attach 66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:57:38 compute-0 great_proskuriakova[269292]: 167 167
Nov 29 06:57:38 compute-0 systemd[1]: libpod-66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae.scope: Deactivated successfully.
Nov 29 06:57:38 compute-0 podman[269276]: 2025-11-29 06:57:38.030206826 +0000 UTC m=+0.219598327 container died 66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 06:57:38 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/2021761786' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:57:38 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/2021761786' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:57:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-463c3b21b02fcb1945181f96e2b455622477f5dd9b1824765528adcba36b455f-merged.mount: Deactivated successfully.
Nov 29 06:57:39 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:39.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:39.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:39 compute-0 sudo[269312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:39 compute-0 sudo[269312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:39 compute-0 sudo[269312]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:39 compute-0 sudo[269337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:39 compute-0 sudo[269337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:39 compute-0 sudo[269337]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:40 compute-0 ceph-mon[74654]: pgmap v1355: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:40 compute-0 podman[269276]: 2025-11-29 06:57:40.183728949 +0000 UTC m=+2.373120450 container remove 66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 29 06:57:40 compute-0 systemd[1]: libpod-conmon-66f0eb4eadc5e32d666a6487b591d746ec81a301750f3f263a5efadbd68464ae.scope: Deactivated successfully.
Nov 29 06:57:40 compute-0 podman[269369]: 2025-11-29 06:57:40.393176795 +0000 UTC m=+0.045073972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:57:40 compute-0 sshd-session[269310]: Invalid user hadoop from 34.92.81.41 port 37922
Nov 29 06:57:40 compute-0 sshd-session[269310]: Received disconnect from 34.92.81.41 port 37922:11: Bye Bye [preauth]
Nov 29 06:57:40 compute-0 sshd-session[269310]: Disconnected from invalid user hadoop 34.92.81.41 port 37922 [preauth]
Nov 29 06:57:41 compute-0 podman[269369]: 2025-11-29 06:57:41.065435287 +0000 UTC m=+0.717332434 container create efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:57:41 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:41.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:41 compute-0 systemd[1]: Started libpod-conmon-efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080.scope.
Nov 29 06:57:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65f341a8619184dcc9a322f97408c6659eab86dd02ce96f53f8094aa256a4729/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65f341a8619184dcc9a322f97408c6659eab86dd02ce96f53f8094aa256a4729/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65f341a8619184dcc9a322f97408c6659eab86dd02ce96f53f8094aa256a4729/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65f341a8619184dcc9a322f97408c6659eab86dd02ce96f53f8094aa256a4729/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:57:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:41.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:41 compute-0 ceph-mon[74654]: pgmap v1356: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:41 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:57:41 compute-0 podman[269369]: 2025-11-29 06:57:41.725558422 +0000 UTC m=+1.377455599 container init efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 06:57:41 compute-0 podman[269369]: 2025-11-29 06:57:41.73557491 +0000 UTC m=+1.387472077 container start efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 29 06:57:42 compute-0 sshd-session[269384]: Received disconnect from 176.109.67.96 port 54088:11: Bye Bye [preauth]
Nov 29 06:57:42 compute-0 sshd-session[269384]: Disconnected from authenticating user root 176.109.67.96 port 54088 [preauth]
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]: {
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:     "1": [
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:         {
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:             "devices": [
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:                 "/dev/loop3"
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:             ],
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:             "lv_name": "ceph_lv0",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:             "lv_size": "7511998464",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=336ec58c-893b-528f-a0c1-6ed1196bc047,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=91f280f1-e534-4adc-bf70-98711580c2dd,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:             "lv_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:             "name": "ceph_lv0",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:             "tags": {
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:                 "ceph.block_uuid": "G2LOnV-vbos-bbgd-X40o-GlAt-RRjQ-VC1qMP",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:                 "ceph.cluster_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:                 "ceph.cluster_name": "ceph",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:                 "ceph.crush_device_class": "",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:                 "ceph.encrypted": "0",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:                 "ceph.osd_fsid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:                 "ceph.osd_id": "1",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:                 "ceph.type": "block",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:                 "ceph.vdo": "0"
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:             },
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:             "type": "block",
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:             "vg_name": "ceph_vg0"
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:         }
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]:     ]
Nov 29 06:57:42 compute-0 intelligent_mestorf[269388]: }
Nov 29 06:57:42 compute-0 systemd[1]: libpod-efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080.scope: Deactivated successfully.
Nov 29 06:57:42 compute-0 podman[269369]: 2025-11-29 06:57:42.598017603 +0000 UTC m=+2.249914770 container attach efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:57:42 compute-0 podman[269369]: 2025-11-29 06:57:42.601314714 +0000 UTC m=+2.253211951 container died efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 06:57:43 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:57:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:43.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:57:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:43.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:45 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:57:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:45.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:57:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:45.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:45 compute-0 ceph-mon[74654]: pgmap v1357: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-65f341a8619184dcc9a322f97408c6659eab86dd02ce96f53f8094aa256a4729-merged.mount: Deactivated successfully.
Nov 29 06:57:45 compute-0 podman[269369]: 2025-11-29 06:57:45.787474986 +0000 UTC m=+5.439372133 container remove efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 06:57:45 compute-0 systemd[1]: libpod-conmon-efd828ad2cec880b8fc410cb8a64dfac5a495a419411715794e4399be50e0080.scope: Deactivated successfully.
Nov 29 06:57:45 compute-0 sudo[269211]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:45 compute-0 sudo[269414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:45 compute-0 sudo[269414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:45 compute-0 sudo[269414]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:45 compute-0 sudo[269439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:57:45 compute-0 sudo[269439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:45 compute-0 sudo[269439]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:46 compute-0 sudo[269464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:46 compute-0 sudo[269464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:46 compute-0 sudo[269464]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:46 compute-0 sudo[269489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 336ec58c-893b-528f-a0c1-6ed1196bc047 -- raw list --format json
Nov 29 06:57:46 compute-0 sudo[269489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:46 compute-0 podman[269553]: 2025-11-29 06:57:46.375046679 +0000 UTC m=+0.022847096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:57:47 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:47.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:47 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:57:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:47.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:47 compute-0 podman[269553]: 2025-11-29 06:57:47.469218413 +0000 UTC m=+1.117018820 container create 056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:57:47 compute-0 ceph-mon[74654]: pgmap v1358: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:47 compute-0 ceph-mon[74654]: pgmap v1359: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:48 compute-0 systemd[1]: Started libpod-conmon-056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87.scope.
Nov 29 06:57:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:57:48 compute-0 podman[269553]: 2025-11-29 06:57:48.435172769 +0000 UTC m=+2.082973186 container init 056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 06:57:48 compute-0 podman[269553]: 2025-11-29 06:57:48.442541763 +0000 UTC m=+2.090342180 container start 056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 06:57:48 compute-0 clever_perlman[269572]: 167 167
Nov 29 06:57:48 compute-0 systemd[1]: libpod-056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87.scope: Deactivated successfully.
Nov 29 06:57:48 compute-0 sshd-session[269568]: Invalid user cloudera from 49.247.35.31 port 51628
Nov 29 06:57:48 compute-0 sshd-session[269568]: Received disconnect from 49.247.35.31 port 51628:11: Bye Bye [preauth]
Nov 29 06:57:48 compute-0 sshd-session[269568]: Disconnected from invalid user cloudera 49.247.35.31 port 51628 [preauth]
Nov 29 06:57:48 compute-0 podman[269553]: 2025-11-29 06:57:48.922146259 +0000 UTC m=+2.569946706 container attach 056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 06:57:48 compute-0 podman[269553]: 2025-11-29 06:57:48.923346122 +0000 UTC m=+2.571146559 container died 056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 06:57:49 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:49.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:57:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:49.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:57:49 compute-0 ceph-mon[74654]: pgmap v1360: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:51 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:51.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:51.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c18a208ade45cba741bcbbf9e70905ae1027b330909e9ff04653a927a455ea84-merged.mount: Deactivated successfully.
Nov 29 06:57:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:57:53 compute-0 ceph-mon[74654]: pgmap v1361: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:53 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:53.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:57:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:53.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:57:54 compute-0 podman[269553]: 2025-11-29 06:57:54.013610824 +0000 UTC m=+7.661411261 container remove 056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_perlman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:57:54 compute-0 systemd[1]: libpod-conmon-056e18b443a8b40bff70083a71f8f896ad4cde5bb312172fe02ef129efc87f87.scope: Deactivated successfully.
Nov 29 06:57:54 compute-0 podman[269593]: 2025-11-29 06:57:54.131043814 +0000 UTC m=+1.086252177 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 06:57:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:57:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:57:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:57:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:57:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:57:54
Nov 29 06:57:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:57:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:57:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'images', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'backups']
Nov 29 06:57:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:57:54 compute-0 podman[269621]: 2025-11-29 06:57:54.254347618 +0000 UTC m=+0.033605324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 06:57:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:57:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:57:54 compute-0 ceph-mon[74654]: pgmap v1362: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:54 compute-0 ceph-mon[74654]: pgmap v1363: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:54 compute-0 podman[269621]: 2025-11-29 06:57:54.750524492 +0000 UTC m=+0.529782118 container create f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:57:54 compute-0 systemd[1]: Started libpod-conmon-f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7.scope.
Nov 29 06:57:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 06:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf175aff26530b0afc9a48654857fb4c1438458a861184a74d9f74234298d7fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 06:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf175aff26530b0afc9a48654857fb4c1438458a861184a74d9f74234298d7fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 06:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf175aff26530b0afc9a48654857fb4c1438458a861184a74d9f74234298d7fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 06:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf175aff26530b0afc9a48654857fb4c1438458a861184a74d9f74234298d7fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 06:57:54 compute-0 podman[269621]: 2025-11-29 06:57:54.960767148 +0000 UTC m=+0.740024774 container init f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 06:57:54 compute-0 podman[269621]: 2025-11-29 06:57:54.968011089 +0000 UTC m=+0.747268725 container start f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 06:57:55 compute-0 podman[269621]: 2025-11-29 06:57:55.025167746 +0000 UTC m=+0.804425382 container attach f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 06:57:55 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:55.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:55.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:55 compute-0 gallant_ride[269638]: {
Nov 29 06:57:55 compute-0 gallant_ride[269638]:     "91f280f1-e534-4adc-bf70-98711580c2dd": {
Nov 29 06:57:55 compute-0 gallant_ride[269638]:         "ceph_fsid": "336ec58c-893b-528f-a0c1-6ed1196bc047",
Nov 29 06:57:55 compute-0 gallant_ride[269638]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 06:57:55 compute-0 gallant_ride[269638]:         "osd_id": 1,
Nov 29 06:57:55 compute-0 gallant_ride[269638]:         "osd_uuid": "91f280f1-e534-4adc-bf70-98711580c2dd",
Nov 29 06:57:55 compute-0 gallant_ride[269638]:         "type": "bluestore"
Nov 29 06:57:55 compute-0 gallant_ride[269638]:     }
Nov 29 06:57:55 compute-0 gallant_ride[269638]: }
Nov 29 06:57:55 compute-0 systemd[1]: libpod-f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7.scope: Deactivated successfully.
Nov 29 06:57:55 compute-0 podman[269621]: 2025-11-29 06:57:55.905040292 +0000 UTC m=+1.684297918 container died f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 06:57:55 compute-0 sshd-session[269644]: Invalid user info from 103.143.238.173 port 44284
Nov 29 06:57:55 compute-0 sshd-session[269644]: Received disconnect from 103.143.238.173 port 44284:11: Bye Bye [preauth]
Nov 29 06:57:55 compute-0 sshd-session[269644]: Disconnected from invalid user info 103.143.238.173 port 44284 [preauth]
Nov 29 06:57:56 compute-0 ceph-mon[74654]: pgmap v1364: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:56 compute-0 sshd-session[269660]: Invalid user janice from 162.214.92.14 port 41352
Nov 29 06:57:56 compute-0 sshd-session[269660]: Received disconnect from 162.214.92.14 port 41352:11: Bye Bye [preauth]
Nov 29 06:57:56 compute-0 sshd-session[269660]: Disconnected from invalid user janice 162.214.92.14 port 41352 [preauth]
Nov 29 06:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf175aff26530b0afc9a48654857fb4c1438458a861184a74d9f74234298d7fe-merged.mount: Deactivated successfully.
Nov 29 06:57:56 compute-0 podman[269621]: 2025-11-29 06:57:56.537248143 +0000 UTC m=+2.316505769 container remove f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 06:57:56 compute-0 systemd[1]: libpod-conmon-f871da8aa383fdb1a0a4843958e16cbb9b3b1d4f929381e66492177273e4e7a7.scope: Deactivated successfully.
Nov 29 06:57:56 compute-0 sudo[269489]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:57:56 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:57:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:57:56 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:57:56 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 4d60daf7-9d2e-4d39-9e0a-154dc064fd36 does not exist
Nov 29 06:57:56 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 50130242-358e-4b71-b19e-6498f0ba4704 does not exist
Nov 29 06:57:56 compute-0 ceph-mgr[74948]: [progress WARNING root] complete: ev 28d220f8-d8ef-44a3-aa9a-60912dcde1d1 does not exist
Nov 29 06:57:56 compute-0 sudo[269677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:56 compute-0 sudo[269677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:56 compute-0 sudo[269677]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:56 compute-0 sudo[269702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 06:57:56 compute-0 sudo[269702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:56 compute-0 sudo[269702]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:57 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:57.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:57:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:57.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:57:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:57:57 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:57:57 compute-0 ceph-mon[74654]: pgmap v1365: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:58 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:57:59 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:57:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:57:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:57:59.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:57:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:57:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:57:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:57:59.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:57:59 compute-0 sudo[269732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:59 compute-0 sudo[269732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:59 compute-0 sudo[269732]: pam_unix(sudo:session): session closed for user root
Nov 29 06:57:59 compute-0 sudo[269758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:57:59 compute-0 sudo[269758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:57:59 compute-0 sudo[269758]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:00 compute-0 sshd-session[269731]: Invalid user demo from 197.13.24.157 port 43090
Nov 29 06:58:00 compute-0 sshd-session[269731]: Received disconnect from 197.13.24.157 port 43090:11: Bye Bye [preauth]
Nov 29 06:58:00 compute-0 sshd-session[269731]: Disconnected from invalid user demo 197.13.24.157 port 43090 [preauth]
Nov 29 06:58:01 compute-0 sshd-session[269729]: Received disconnect from 27.112.78.245 port 50010:11: Bye Bye [preauth]
Nov 29 06:58:01 compute-0 sshd-session[269729]: Disconnected from authenticating user root 27.112.78.245 port 50010 [preauth]
Nov 29 06:58:01 compute-0 ceph-mon[74654]: pgmap v1366: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:01 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:58:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:01.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:58:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:01.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:02 compute-0 podman[269784]: 2025-11-29 06:58:02.088681508 +0000 UTC m=+0.050711188 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 06:58:02 compute-0 podman[269785]: 2025-11-29 06:58:02.118160616 +0000 UTC m=+0.080221557 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 06:58:02 compute-0 ceph-mon[74654]: pgmap v1367: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:58:03 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:03.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:03.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:03 compute-0 ceph-mon[74654]: pgmap v1368: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:04 compute-0 sshd-session[269827]: Invalid user cumulus from 103.63.25.115 port 34780
Nov 29 06:58:04 compute-0 sshd-session[269827]: Received disconnect from 103.63.25.115 port 34780:11: Bye Bye [preauth]
Nov 29 06:58:04 compute-0 sshd-session[269827]: Disconnected from invalid user cumulus 103.63.25.115 port 34780 [preauth]
Nov 29 06:58:05 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:05.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:05.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:07 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:07.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:58:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:07.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:58:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:58:09 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:09.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:09.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:10 compute-0 ceph-mon[74654]: pgmap v1369: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:11 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:11.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:11 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:11 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:11 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:11.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:12 compute-0 ceph-mon[74654]: pgmap v1370: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:12 compute-0 ceph-mon[74654]: pgmap v1371: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:12 compute-0 ceph-mon[74654]: pgmap v1372: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:13.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 06:58:13 compute-0 ceph-mgr[74948]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 06:58:13 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:13 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:13 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:13.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:14 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:58:15 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:15.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:15 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:15 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:15 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:15.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:16 compute-0 ceph-mon[74654]: pgmap v1373: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:16 compute-0 ceph-mon[74654]: pgmap v1374: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:16 compute-0 sshd-session[269836]: Invalid user oracle from 193.163.72.91 port 36722
Nov 29 06:58:17 compute-0 sshd-session[269836]: Received disconnect from 193.163.72.91 port 36722:11: Bye Bye [preauth]
Nov 29 06:58:17 compute-0 sshd-session[269836]: Disconnected from invalid user oracle 193.163.72.91 port 36722 [preauth]
Nov 29 06:58:17 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:58:17.251 157767 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:58:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:58:17.253 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:58:17 compute-0 ovn_metadata_agent[157760]: 2025-11-29 06:58:17.253 157767 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:58:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:17.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:17 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:17 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:17 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:17.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:18 compute-0 ceph-mon[74654]: pgmap v1375: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:18 compute-0 nova_compute[251877]: 2025-11-29 06:58:18.958 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:58:18 compute-0 nova_compute[251877]: 2025-11-29 06:58:18.959 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:58:18 compute-0 nova_compute[251877]: 2025-11-29 06:58:18.959 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 06:58:19 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:19.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:19 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:19 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:19 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:19.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:19 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:58:20 compute-0 sudo[269840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:58:20 compute-0 sudo[269840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:58:20 compute-0 sudo[269840]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:20 compute-0 nova_compute[251877]: 2025-11-29 06:58:20.047 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:58:20 compute-0 sudo[269865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:58:20 compute-0 sudo[269865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:58:20 compute-0 sudo[269865]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:20 compute-0 ceph-mon[74654]: pgmap v1376: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:20 compute-0 nova_compute[251877]: 2025-11-29 06:58:20.957 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:58:21 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:58:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:21.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:58:21 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:21 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:21 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:21.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:22 compute-0 nova_compute[251877]: 2025-11-29 06:58:22.191 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:58:22 compute-0 nova_compute[251877]: 2025-11-29 06:58:22.191 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:58:22 compute-0 nova_compute[251877]: 2025-11-29 06:58:22.191 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:58:22 compute-0 nova_compute[251877]: 2025-11-29 06:58:22.191 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 06:58:22 compute-0 nova_compute[251877]: 2025-11-29 06:58:22.192 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:58:22 compute-0 ceph-mon[74654]: pgmap v1377: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:23 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:58:23 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/705562816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:58:23 compute-0 nova_compute[251877]: 2025-11-29 06:58:23.194 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.002s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:58:23 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:23.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:23 compute-0 nova_compute[251877]: 2025-11-29 06:58:23.376 251881 WARNING nova.virt.libvirt.driver [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 06:58:23 compute-0 nova_compute[251877]: 2025-11-29 06:58:23.378 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5208MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 06:58:23 compute-0 nova_compute[251877]: 2025-11-29 06:58:23.378 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 06:58:23 compute-0 nova_compute[251877]: 2025-11-29 06:58:23.379 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 06:58:23 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:23 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:23 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:23.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:23 compute-0 nova_compute[251877]: 2025-11-29 06:58:23.761 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 06:58:23 compute-0 nova_compute[251877]: 2025-11-29 06:58:23.761 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 06:58:23 compute-0 nova_compute[251877]: 2025-11-29 06:58:23.791 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 06:58:24 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/705562816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:58:24 compute-0 ceph-mon[74654]: pgmap v1378: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 06:58:24 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3131628032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:58:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:58:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:58:24 compute-0 nova_compute[251877]: 2025-11-29 06:58:24.310 251881 DEBUG oslo_concurrency.processutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 06:58:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:58:24 compute-0 nova_compute[251877]: 2025-11-29 06:58:24.318 251881 DEBUG nova.compute.provider_tree [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed in ProviderTree for provider: 36ed0248-8d04-4532-95bb-daab89f12202 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 06:58:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:58:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:58:24 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:58:24 compute-0 nova_compute[251877]: 2025-11-29 06:58:24.457 251881 DEBUG nova.scheduler.client.report [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Inventory has not changed for provider 36ed0248-8d04-4532-95bb-daab89f12202 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 06:58:24 compute-0 nova_compute[251877]: 2025-11-29 06:58:24.459 251881 DEBUG nova.compute.resource_tracker [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 06:58:24 compute-0 nova_compute[251877]: 2025-11-29 06:58:24.459 251881 DEBUG oslo_concurrency.lockutils [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 06:58:24 compute-0 nova_compute[251877]: 2025-11-29 06:58:24.460 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:58:24 compute-0 nova_compute[251877]: 2025-11-29 06:58:24.460 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 06:58:24 compute-0 nova_compute[251877]: 2025-11-29 06:58:24.692 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 06:58:24 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:58:25 compute-0 podman[269936]: 2025-11-29 06:58:25.084696263 +0000 UTC m=+0.067037132 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 29 06:58:25 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:25.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:25 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3131628032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:58:25 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2010625727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:58:25 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:25 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:25 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:25.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:25 compute-0 nova_compute[251877]: 2025-11-29 06:58:25.693 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:58:25 compute-0 nova_compute[251877]: 2025-11-29 06:58:25.693 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:58:25 compute-0 nova_compute[251877]: 2025-11-29 06:58:25.693 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:58:25 compute-0 nova_compute[251877]: 2025-11-29 06:58:25.694 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 06:58:25 compute-0 nova_compute[251877]: 2025-11-29 06:58:25.959 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:58:26 compute-0 ceph-mon[74654]: pgmap v1379: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:26 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/2138341124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:58:26 compute-0 nova_compute[251877]: 2025-11-29 06:58:26.958 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:58:26 compute-0 nova_compute[251877]: 2025-11-29 06:58:26.958 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 06:58:26 compute-0 nova_compute[251877]: 2025-11-29 06:58:26.959 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 06:58:26 compute-0 nova_compute[251877]: 2025-11-29 06:58:26.975 251881 DEBUG nova.compute.manager [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 06:58:27 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:27.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:27 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:27 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:27 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:27.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:27 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/3947950218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:58:27 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/2021454943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 06:58:27 compute-0 ceph-mon[74654]: pgmap v1380: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:27 compute-0 nova_compute[251877]: 2025-11-29 06:58:27.957 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:58:29 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:29.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:29 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:29 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:58:29 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:29.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:58:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 06:58:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:58:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:58:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:58:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:58:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 06:58:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 06:58:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 06:58:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 06:58:29 compute-0 ceph-mgr[74948]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 06:58:29 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:58:30 compute-0 ceph-mon[74654]: pgmap v1381: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:31 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:31.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:31 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:31 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:31 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:31.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:32 compute-0 ceph-mon[74654]: pgmap v1382: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:32 compute-0 nova_compute[251877]: 2025-11-29 06:58:32.959 251881 DEBUG oslo_service.periodic_task [None req-92a94735-82b3-4641-bcfb-ff27aa048239 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 06:58:33 compute-0 podman[269960]: 2025-11-29 06:58:33.14046711 +0000 UTC m=+0.090317028 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:58:33 compute-0 podman[269961]: 2025-11-29 06:58:33.189665936 +0000 UTC m=+0.134693351 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 06:58:33 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:33.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:33 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:33 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:58:33 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:33.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:58:33 compute-0 ceph-mon[74654]: pgmap v1383: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:34 compute-0 sshd-session[270003]: Accepted publickey for zuul from 192.168.122.10 port 54498 ssh2: ECDSA SHA256:q0RMlXdalxA6snNWza7TmIndlwLWLLpO+sXhiGKqO/I
Nov 29 06:58:34 compute-0 systemd-logind[797]: New session 52 of user zuul.
Nov 29 06:58:34 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 29 06:58:34 compute-0 sshd-session[270003]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 06:58:34 compute-0 sudo[270007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 29 06:58:34 compute-0 sudo[270007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 06:58:34 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:58:35 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:35.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:35 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:35 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:35 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:35.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:35 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:58:35 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.8 total, 600.0 interval
                                           Cumulative writes: 9755 writes, 36K keys, 9755 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9755 writes, 2327 syncs, 4.19 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 561 writes, 878 keys, 561 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s
                                           Interval WAL: 561 writes, 253 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 06:58:36 compute-0 sshd-session[270041]: Invalid user ansadmin from 118.193.39.127 port 40844
Nov 29 06:58:36 compute-0 sshd-session[270041]: Received disconnect from 118.193.39.127 port 40844:11: Bye Bye [preauth]
Nov 29 06:58:36 compute-0 sshd-session[270041]: Disconnected from invalid user ansadmin 118.193.39.127 port 40844 [preauth]
Nov 29 06:58:37 compute-0 ceph-mon[74654]: pgmap v1384: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:37 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24755 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:37 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:37.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:37 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:37 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:37 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:37.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:37 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24761 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:37 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14961 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:38 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/1396494908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 06:58:38 compute-0 ceph-mon[74654]: from='client.? 192.168.122.10:0/1396494908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 06:58:38 compute-0 ceph-mon[74654]: from='client.24755 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:38 compute-0 ceph-mon[74654]: pgmap v1385: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:38 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14967 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:38 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24820 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:38 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 06:58:38 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1977521267' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 06:58:39 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 06:58:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:39.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 06:58:39 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:39 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:39 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:39.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:39 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24826 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:39 compute-0 ceph-mon[74654]: from='client.24761 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:39 compute-0 ceph-mon[74654]: from='client.14961 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:39 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1573460643' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 06:58:39 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1977521267' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 06:58:40 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:58:40 compute-0 sudo[270274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:58:40 compute-0 sudo[270274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:58:40 compute-0 sudo[270274]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:40 compute-0 sudo[270303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:58:40 compute-0 sudo[270303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:58:40 compute-0 sudo[270303]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:41 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:41.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:41 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:41 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:41 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:41.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:42 compute-0 ceph-mon[74654]: from='client.14967 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:42 compute-0 ceph-mon[74654]: from='client.24820 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:42 compute-0 ceph-mon[74654]: pgmap v1386: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:42 compute-0 ceph-mon[74654]: from='client.24826 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:42 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/857796520' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 06:58:43 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:43 compute-0 ceph-mon[74654]: pgmap v1387: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:43.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:43 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:43 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:43 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:43.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:44 compute-0 ovs-vsctl[270391]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 06:58:44 compute-0 ceph-mon[74654]: pgmap v1388: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:58:45 compute-0 sshd-session[270340]: Invalid user user from 45.78.221.93 port 44900
Nov 29 06:58:45 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:45.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:45 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:45 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:45 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:45.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:45 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24776 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:45 compute-0 virtqemud[251417]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 06:58:45 compute-0 virtqemud[251417]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 06:58:45 compute-0 virtqemud[251417]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 06:58:45 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 06:58:45 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 06:58:46 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24835 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:46 compute-0 ceph-mon[74654]: pgmap v1389: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:46 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24841 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:46 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: cache status {prefix=cache status} (starting...)
Nov 29 06:58:46 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 06:58:46 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: client ls {prefix=client ls} (starting...)
Nov 29 06:58:46 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 06:58:46 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24853 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:46 compute-0 lvm[270751]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 06:58:46 compute-0 lvm[270751]: VG ceph_vg0 finished
Nov 29 06:58:46 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24803 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:46 compute-0 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 06:58:46 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:46.951+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 06:58:46 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 06:58:46 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 06:58:47 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 06:58:47 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 06:58:47 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14982 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:47 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:47.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:47 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 06:58:47 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 06:58:47 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:47 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:47 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:47.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:47 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24880 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:47 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:47.453+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 06:58:47 compute-0 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 06:58:47 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 06:58:47 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 06:58:47 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 06:58:47 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 06:58:47 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 06:58:47 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 06:58:47 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 06:58:47 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 06:58:48 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 06:58:48 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 06:58:48 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24836 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:48 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 06:58:48 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 06:58:48 compute-0 ceph-mon[74654]: from='client.24776 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:48 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1276981983' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 06:58:48 compute-0 ceph-mon[74654]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 06:58:48 compute-0 ceph-mon[74654]: from='client.24835 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:48 compute-0 ceph-mon[74654]: from='client.24841 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:48 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/135313544' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:58:48 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2741286316' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 06:58:48 compute-0 ceph-mon[74654]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 06:58:48 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.14994 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:48 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24910 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:48 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: ops {prefix=ops} (starting...)
Nov 29 06:58:48 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 06:58:48 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24848 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:48 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24916 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:49 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: session ls {prefix=session ls} (starting...)
Nov 29 06:58:49 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf Can't run that command on an inactive MDS!
Nov 29 06:58:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 06:58:49 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 06:58:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:49.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:49 compute-0 ceph-mds[94810]: mds.cephfs.compute-0.jzycnf asok_command: status {prefix=status} (starting...)
Nov 29 06:58:49 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15018 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:49 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:49.385+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 06:58:49 compute-0 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 06:58:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 06:58:49 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 06:58:49 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 06:58:49 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:49 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:49 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:49.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.24853 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.24803 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.14982 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: pgmap v1390: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1029441042' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1529673451' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3853464868' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.24880 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/4200116878' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1898843982' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/961495441' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/3706317250' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/352369704' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.24836 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/2721476524' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/446556486' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.14994 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.24910 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.24848 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.24916 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: pgmap v1391: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2498518273' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/684679155' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3910457812' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2393270283' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.15018 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1704439386' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 06:58:49 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3758656299' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:58:49 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 06:58:49 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/844408131' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 06:58:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:58:50 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24970 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:50 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:50.237+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 06:58:50 compute-0 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 06:58:50 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24893 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:50 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:50.247+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 06:58:50 compute-0 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 06:58:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 06:58:50 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1662226395' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 06:58:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 06:58:50 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145305371' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 06:58:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 06:58:50 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3003957364' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 06:58:50 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 06:58:50 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120183002' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 06:58:51 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:51.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:51 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:51 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:51 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:51.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890585 data_alloc: 218103808 data_used: 282624
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 6381568 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:24:57.915156+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 133 handle_osd_map epochs [133,134], i have 134, src has [1,134]
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=131/132 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.557994 2 0.000098
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=131/132 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.559969 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=131/132 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=133/134 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 6373376 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:24:58.915307+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 6373376 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 134 heartbeat osd_stat(store_statfs(0x1bca8a000/0x0/0x1bfc00000, data 0xcd772/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:24:59.915504+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 6373376 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:00.915696+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 134 heartbeat osd_stat(store_statfs(0x1bca8a000/0x0/0x1bfc00000, data 0xcd772/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 6365184 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:01.915838+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=133/134 n=5 ec=58/47 lis/c=131/78 les/c/f=132/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=133/134 n=5 ec=58/47 lis/c=133/78 les/c/f=134/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 3.733084 4 0.000496
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=133/134 n=5 ec=58/47 lis/c=133/78 les/c/f=134/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=133/134 n=5 ec=58/47 lis/c=133/78 les/c/f=134/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000038 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 134 pg[9.1e( v 56'1130 (0'0,56'1130] local-lis/les=133/134 n=5 ec=58/47 lis/c=133/78 les/c/f=134/79/0 sis=133) [1] r=0 lpr=133 pi=[78,133)/1 crt=56'1130 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892679 data_alloc: 218103808 data_used: 282624
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 6365184 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:02.916003+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 6365184 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:03.916159+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 6356992 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 134 heartbeat osd_stat(store_statfs(0x1bca8c000/0x0/0x1bfc00000, data 0xcd772/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:04.916296+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 6356992 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:05.916419+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 6348800 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:06.916594+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892679 data_alloc: 218103808 data_used: 282624
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 6348800 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:07.916737+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 134 heartbeat osd_stat(store_statfs(0x1bca8c000/0x0/0x1bfc00000, data 0xcd772/0x192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 6348800 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:08.916953+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 6340608 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:09.917157+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 6340608 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.667863846s of 13.856606483s, submitted: 12
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:10.917347+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f(unlocked)] enter Initial
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=0 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000085 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=0 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000031 1 0.000044
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 135 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000301 1 0.000092
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000046 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000363 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 135 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 6307840 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:11.917512+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898207 data_alloc: 218103808 data_used: 290816
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 6299648 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:12.917645+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 6283264 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:13.917761+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 135 heartbeat osd_stat(store_statfs(0x1bca88000/0x0/0x1bfc00000, data 0xcf3cb/0x195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 6283264 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 135 heartbeat osd_stat(store_statfs(0x1bca88000/0x0/0x1bfc00000, data 0xcf3cb/0x195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:14.917927+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 6275072 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:15.918103+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 6266880 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 135 handle_osd_map epochs [135,136], i have 136, src has [1,136]
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 135 heartbeat osd_stat(store_statfs(0x1bca88000/0x0/0x1bfc00000, data 0xcf3cb/0x195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 5.839487 2 0.000169
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 5.840039 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 5.840112 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=135) [1] r=0 lpr=135 pi=[98,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000406 1 0.000660
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000056 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 136 pg[9.1f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: unregistering
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:16.918265+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 901853 data_alloc: 218103808 data_used: 290816
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 6242304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:17.918381+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 6242304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:18.918541+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 6234112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:19.918742+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 6234112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:20.918942+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.365139961s of 10.466350555s, submitted: 5
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 136 heartbeat osd_stat(store_statfs(0x1bca84000/0x0/0x1bfc00000, data 0xd105e/0x198000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 6234112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 29 06:58:52 compute-0 ceph-osd[85162]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 29 06:58:52 compute-0 ceph-osd[85162]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=56'1130 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 4.826138 5 0.000216
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=56'1130 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 lc 0'0 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=98/98 les/c/f=99/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 crt=56'1130 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:21.919089+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 904155 data_alloc: 218103808 data_used: 290816
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 6332416 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:22.919320+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 scrub-queue::remove_from_osd_queue removing pg[9.1f] failed. State was: not registered w/ OSD
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 lc 54'521 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.481476 4 0.000502
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 lc 54'521 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 lc 54'521 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000169 1 0.000062
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 lc 54'521 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 6332416 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:23.919508+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 6324224 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 1.536255 1 0.000165
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 137 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:24.919734+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 6324224 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 137 heartbeat osd_stat(store_statfs(0x1bca82000/0x0/0x1bfc00000, data 0xd2bf4/0x19c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:25.919949+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 6324224 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:26.920112+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910988 data_alloc: 218103808 data_used: 290816
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 6316032 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:27.920586+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 6316032 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:28.920976+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 6316032 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:29.921429+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 137 heartbeat osd_stat(store_statfs(0x1bca82000/0x0/0x1bfc00000, data 0xd2bf4/0x19c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 6307840 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _renew_subs
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:30.921712+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 6299648 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _renew_subs
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:31.922033+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915162 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 6299648 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:32.922359+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.259155273s of 12.010793686s, submitted: 14
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 8.198978 1 0.000058
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 11.217182 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 mlcod 0'0 active+remapped mbc={}] exit Started 16.043512 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=136) [1]/[0] r=-1 lpr=136 pi=[98,136)/1 luod=0'0 crt=56'1130 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 luod=0'0 crt=56'1130 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] exit Reset 0.000278 1 0.000432
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] enter Started
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] enter Start
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] exit Start 0.000073 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000084 1 0.000274
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=0/0 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 06:58:52 compute-0 ceph-osd[85162]: merge_log_dups log.dups.size()=0olog.dups.size()=33
Nov 29 06:58:52 compute-0 ceph-osd[85162]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=33
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=136/137 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001825 3 0.000127
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=136/137 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=136/137 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 138 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=136/137 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 6266880 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:33.922630+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 6266880 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 138 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:34.922979+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=136/137 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 2.255414 2 0.000114
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=136/137 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 2.257459 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=136/137 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=138/139 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 6258688 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7a000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:35.923227+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=138/139 n=5 ec=58/47 lis/c=136/98 les/c/f=137/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=138/139 n=5 ec=58/47 lis/c=138/98 les/c/f=139/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.741571 4 0.000202
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=138/139 n=5 ec=58/47 lis/c=138/98 les/c/f=139/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=138/139 n=5 ec=58/47 lis/c=138/98 les/c/f=139/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000039 0 0.000000
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 pg_epoch: 139 pg[9.1f( v 56'1130 (0'0,56'1130] local-lis/les=138/139 n=5 ec=58/47 lis/c=138/98 les/c/f=139/99/0 sis=138) [1] r=0 lpr=138 pi=[98,138)/1 crt=56'1130 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 6258688 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:36.923737+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 6258688 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:37.924015+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 6250496 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:38.924308+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 6250496 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:39.924508+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 6242304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:40.924693+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 6242304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:41.924999+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 6242304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:42.925301+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 6234112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:43.925502+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 6234112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:44.925725+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 6225920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:45.925963+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 6225920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:46.926285+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 6225920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:47.926498+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 6217728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:48.926760+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 6217728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:49.926998+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 6209536 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:50.927333+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 6209536 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:51.927547+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 6201344 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:52.927803+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 6201344 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:53.928054+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 6193152 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:54.928779+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 6193152 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:55.929000+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 6193152 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:56.929170+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 6184960 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:57.929354+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 6176768 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:58.929497+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 6176768 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:25:59.937947+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 6168576 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:00.938189+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 6168576 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:01.938382+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 6168576 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:02.938595+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 6160384 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:03.938855+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 6160384 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:04.939096+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 6152192 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:05.939317+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 6152192 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:06.939669+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 6144000 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:07.939909+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 6144000 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:08.940100+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 6144000 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:09.940322+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 6135808 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:10.940534+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 6135808 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:11.940693+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 6135808 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:12.940865+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 6127616 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:13.941114+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 6127616 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:14.941287+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 6119424 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:15.941484+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 6119424 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:16.941608+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 6111232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:17.941765+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 6111232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:18.941965+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 6111232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:19.942183+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 6103040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:20.942347+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 6103040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:21.943028+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 6094848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:22.943252+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 6094848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:23.943440+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 6094848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:24.943572+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 6086656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:25.943956+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 6086656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:26.944149+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 6078464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:27.944278+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 6078464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:28.944497+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 6078464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:29.944723+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 6070272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:30.944911+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 6070272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:31.945097+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 6062080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:32.945251+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 6062080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:33.945413+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 6053888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:34.945564+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 6053888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:35.945840+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 6053888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:36.946055+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 6045696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:37.946255+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 6045696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:38.946425+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 6037504 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:39.946744+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 6037504 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:40.946934+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 6037504 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:41.947116+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 6029312 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:42.947269+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 6021120 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:43.947436+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 6012928 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:44.947609+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 6012928 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:45.947791+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 6004736 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:46.947940+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 6004736 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:47.954645+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 6004736 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:48.954795+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 5996544 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:49.954946+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 5996544 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:50.955064+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 5996544 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:51.955188+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 5988352 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:52.955316+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 5988352 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:53.955490+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 5980160 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:54.955615+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 5980160 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:55.955823+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 5963776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:56.956052+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 5963776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:57.956228+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 5963776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:58.956418+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 5955584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:26:59.956649+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 5955584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:00.956836+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 5955584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:01.957043+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 5947392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:02.957191+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 5947392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:03.957428+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 5939200 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:04.957567+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 5939200 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:05.957758+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 5931008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:06.957951+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 5931008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:07.958105+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 5931008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:08.958281+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 5922816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:09.958450+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 5922816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:10.958588+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 5922816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:11.958764+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 5914624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:12.958956+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 5914624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:13.959127+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 5914624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:14.959261+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 5906432 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:15.959403+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 5906432 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:16.959691+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 5898240 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:17.959978+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 5898240 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:18.960214+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 5890048 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:19.960442+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 5890048 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:20.960586+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 5890048 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:21.960746+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 5881856 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:22.960924+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:23.961232+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 5881856 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:24.961397+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 5873664 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:25.961613+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 5873664 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:26.961787+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 5865472 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:27.961961+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 5865472 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:28.962105+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 5857280 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:29.962376+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 5857280 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:30.962582+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 5849088 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:31.962766+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 5849088 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:32.962975+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 5840896 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:33.963127+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 5840896 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:34.963381+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 5832704 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:35.963572+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 5832704 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:36.963727+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 5832704 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:37.963869+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 5824512 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:38.964111+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 5824512 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:39.964379+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 5816320 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:40.964521+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 5816320 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:41.964678+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 5816320 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:42.964832+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 5808128 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:43.964955+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 5808128 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:44.965111+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 5799936 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:45.965289+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 5799936 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:46.965433+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 5799936 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:47.965626+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 5791744 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:48.965768+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 5791744 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:49.966023+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 5783552 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:50.966206+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 5783552 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:51.966377+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 5775360 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:52.966536+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 5775360 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:53.966736+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 5775360 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:54.966948+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 5767168 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:55.967083+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 5767168 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:56.967239+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 5767168 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:57.967378+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 5758976 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:58.967515+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 5758976 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:27:59.967712+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 5742592 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:00.967872+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 5742592 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:01.968119+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 5734400 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:02.968310+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 5734400 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:03.968445+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 5734400 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:04.968595+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 5726208 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Cumulative writes: 7884 writes, 33K keys, 7884 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 7884 writes, 1451 syncs, 5.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7884 writes, 33K keys, 7884 commit groups, 1.0 writes per commit group, ingest: 20.94 MB, 0.03 MB/s
                                           Interval WAL: 7884 writes, 1451 syncs, 5.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:05.968727+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 5660672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:06.968983+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 5660672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:07.969128+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 5652480 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:08.969251+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 5652480 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:09.969455+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 5644288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:10.969685+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 5644288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:11.969917+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 5644288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:12.970120+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 5636096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:13.970369+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 5627904 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:14.970660+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 5619712 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:15.970998+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 5619712 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:16.971221+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 5611520 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:17.971485+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 5611520 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:18.971679+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 5611520 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:19.971919+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 5603328 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:20.972071+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 5603328 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:21.972208+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 5603328 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:22.972380+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 5578752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:23.972539+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 5578752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:24.972683+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 5570560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:25.972828+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 5570560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:26.972955+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 5570560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:27.973237+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 5562368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:28.973488+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 5562368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:29.973796+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 5554176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:30.973946+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 5554176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:31.974255+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 5545984 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:32.974382+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 5545984 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:33.974700+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 5545984 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:34.974839+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 5537792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:35.974989+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 5537792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:36.975158+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 5537792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:37.975329+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 5529600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:38.975519+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 5529600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:39.975745+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 5521408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:40.975969+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 5521408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:41.976264+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 5513216 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:42.976507+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 5513216 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:43.977085+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 5513216 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:44.977430+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 5505024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:45.977720+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 5505024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:46.977985+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 5505024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:47.978113+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 5496832 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:48.978259+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 5496832 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:49.978457+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 5488640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:50.978621+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 5488640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:51.978798+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 5488640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:52.978966+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 5480448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:53.979231+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 5480448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:54.979441+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 5472256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:55.979695+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 5472256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:56.979960+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 5464064 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:57.980192+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 5455872 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:58.980325+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 5455872 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:28:59.980551+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 5447680 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:00.980728+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 5447680 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:01.980863+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 5447680 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:02.981008+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 5439488 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:03.981120+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 5439488 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:04.981696+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 5431296 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:05.981926+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 5431296 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:06.982117+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 5423104 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:07.982282+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 5423104 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:08.982433+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 5423104 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:09.982640+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 5414912 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:10.982807+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 5414912 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:11.983042+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 5406720 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:12.983461+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 5406720 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:13.983623+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 5406720 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:14.983928+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 5398528 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:15.984100+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 5398528 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:16.984246+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 5390336 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:17.984466+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 5390336 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:18.984741+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 5382144 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:19.984988+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 5382144 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:20.985178+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 5373952 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:21.985378+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 5373952 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:22.985699+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 5373952 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:23.985840+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 5365760 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:24.989641+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 5365760 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:25.989803+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 5357568 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:26.989979+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 5357568 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:27.990136+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 5349376 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:28.990949+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 5349376 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:29.991142+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 5349376 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:30.991277+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 5349376 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:31.991394+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 5341184 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:32.991580+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 5341184 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:33.991723+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 5332992 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:34.991960+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 5332992 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:35.992102+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 5324800 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:36.992319+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 5324800 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:37.992483+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 5324800 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:38.992633+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 5308416 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:39.992823+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 5308416 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:40.993029+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 5292032 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:41.993195+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 5292032 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:42.993368+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 5283840 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:43.993601+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 5283840 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:44.993776+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 5283840 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:45.994011+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 5275648 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:46.994180+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 5275648 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:47.994326+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 5267456 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:48.994459+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 5259264 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:49.994661+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 5259264 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:50.994779+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 5251072 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:51.994924+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 5251072 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:52.995053+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 5242880 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:53.995233+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 5242880 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:54.995348+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 5242880 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:55.995478+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 5234688 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:56.995737+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 5234688 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:57.995908+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 5226496 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:58.996117+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 5226496 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:29:59.996261+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 5218304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:00.996396+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 5218304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:01.996605+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 5218304 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:02.996746+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 5210112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:03.996946+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 5210112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:04.997170+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 5210112 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:05.997490+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:06.997667+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:07.998021+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:08.998176+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:09.998358+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:10.998593+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:11.998802+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:12.998995+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:13.999272+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:14.999461+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:15.999844+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:17.000118+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 5201920 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:18.000362+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:19.000654+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:20.001054+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:21.001196+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:22.001376+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:23.001510+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:24.001658+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:25.001870+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:26.002086+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:27.002262+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:28.002456+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:29.002605+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:30.002746+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5193728 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 294.404235840s of 297.958953857s, submitted: 12
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:31.002903+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 5103616 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:32.003167+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 5021696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:33.003487+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917400 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 5021696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:34.003676+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 4972544 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:35.003934+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 4972544 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:36.004162+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 4997120 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:37.004310+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 4980736 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:38.004490+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917328 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 4980736 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:39.004793+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bca7c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:40.005129+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 4947968 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:41.005273+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 4947968 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:42.005428+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 4947968 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:43.005639+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:44.005763+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:45.005980+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:46.006147+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:47.007703+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:48.009622+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:49.010050+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:50.010232+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:51.011808+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:52.011994+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:53.012168+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:54.012347+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:55.012507+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:56.012942+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:57.013097+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:58.013422+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:30:59.013555+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:00.013701+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:01.013834+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:02.013938+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:03.014097+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:04.014280+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:05.014408+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:06.014547+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:07.014777+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:08.015002+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:09.015158+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:10.015436+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:11.015591+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:12.015795+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:13.015980+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:14.016232+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:15.016391+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:16.016611+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:17.016772+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:18.017787+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:19.017954+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:20.018110+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:21.018244+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:22.018352+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:23.018641+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:24.018766+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:25.020043+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:26.020527+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:27.021843+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:28.022012+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 4939776 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:29.022565+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:30.022890+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:31.023205+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:32.023387+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:33.023589+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:34.023778+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:35.024022+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:36.024312+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:37.024476+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:38.024635+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:39.024995+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:40.025258+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:41.025446+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:42.025838+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:43.025955+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:44.026290+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:45.026490+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:46.026682+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:47.026949+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:48.027124+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:49.027352+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:50.027638+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:51.027776+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:52.028040+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:53.028187+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:54.028349+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:55.028514+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:56.028670+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:57.028819+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:58.029000+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:31:59.029146+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:00.029453+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:01.029559+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:02.029671+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:03.029774+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:04.029946+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:05.030107+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:06.030250+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:07.030367+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:08.030503+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:09.030650+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:10.030849+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:11.030987+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:12.031102+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 4931584 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:13.031284+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:14.031444+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:15.031599+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:16.031828+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:17.031928+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:18.032103+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:19.032323+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:20.032564+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:21.032743+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:22.033000+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:23.033185+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:24.033348+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:25.033507+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:26.033764+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:27.036476+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:28.036613+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:29.036836+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:30.037116+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:31.037249+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:32.037378+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:33.037507+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:34.037950+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:35.038114+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:36.038368+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:37.038501+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:38.038749+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:39.038935+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:40.039112+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 4923392 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:41.039246+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 4915200 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:42.039437+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 4915200 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:43.039574+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 4915200 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:44.039869+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 4907008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:45.040026+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 4907008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:46.040203+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 4907008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:47.040367+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 4907008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:48.040503+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 4907008 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:49.040688+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:50.041036+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:51.041314+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:52.041653+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:53.041985+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:54.042243+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:55.042519+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:56.044250+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:57.046642+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 4898816 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:58.047373+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:32:59.048231+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:00.049010+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:01.050096+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:02.050583+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:03.051394+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:04.051704+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:05.051927+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:06.052350+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:07.052492+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:08.052924+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:09.053105+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:10.053276+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:11.053764+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:12.054189+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:13.054591+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:14.054945+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:15.055281+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:16.055437+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:17.055567+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:18.055692+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:19.055823+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 4890624 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:20.055943+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: mgrc ms_handle_reset ms_handle_reset con 0x5633f09adc00
Nov 29 06:58:52 compute-0 ceph-osd[85162]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1221624088
Nov 29 06:58:52 compute-0 ceph-osd[85162]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1221624088,v1:192.168.122.100:6801/1221624088]
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: get_auth_request con 0x5633f3398800 auth_method 0
Nov 29 06:58:52 compute-0 ceph-osd[85162]: mgrc handle_mgr_configure stats_period=5
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 4685824 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:21.056189+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 4685824 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:22.056389+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 4685824 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 ms_handle_reset con 0x5633f13f6c00 session 0x5633f0947c20
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: handle_auth_request added challenge on 0x5633f3a4e800
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:23.056527+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 4685824 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:24.056655+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 4685824 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:25.056926+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:26.057172+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:27.057369+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:28.058499+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:29.059443+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:30.060150+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:31.060831+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:32.061389+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:33.061756+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:34.062092+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:35.062357+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:36.062577+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:37.063012+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:38.063457+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:39.063668+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 4661248 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:40.063962+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:41.064236+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:42.064434+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:43.064560+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:44.064737+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:45.064903+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:46.065066+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:47.065195+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:48.065424+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:49.065585+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:50.065789+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:51.065991+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:52.066181+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:53.066575+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:54.066899+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:55.067031+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:56.067186+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:57.067323+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:58.067446+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:33:59.067646+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 4653056 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:00.067917+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:01.068126+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:02.068296+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:03.068538+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:04.068736+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:05.068986+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:06.069176+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:07.069304+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:08.069439+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:09.069578+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:10.069751+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:11.070035+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:12.070271+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:13.070517+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:14.070692+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 4644864 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:15.070885+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:16.071242+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:17.071468+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:18.071712+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:19.071961+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:20.072289+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:21.072646+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:22.072981+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:23.073269+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:24.073458+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:25.073655+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:26.073860+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:27.074094+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:28.074338+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:29.074550+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:30.074837+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:31.075166+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:32.075434+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:33.075972+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:34.076455+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:35.078144+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:36.078704+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:37.080929+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:38.081491+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:39.082109+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:40.082490+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:41.083751+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:42.084164+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:43.085191+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:44.089986+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:45.090222+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:46.090367+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:47.090502+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:48.090929+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:49.091256+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:50.091543+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:51.091793+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:52.092149+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:53.092449+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:54.092749+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:55.092918+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:56.093054+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:57.093809+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:58.094043+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:34:59.094221+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:00.094476+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:01.094727+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:02.095000+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:03.095310+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:04.095553+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:05.095835+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:06.096145+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:07.096367+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:08.096578+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:09.096819+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:10.097093+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:11.097335+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:12.097593+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:13.098239+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:14.098498+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:15.098654+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:16.098954+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:17.099250+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:18.099571+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:19.099825+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:20.100186+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:21.100383+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:22.100634+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:23.100861+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:24.101101+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:25.101305+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:26.101612+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:27.102022+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:28.102212+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 4636672 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:29.102511+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 4628480 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:30.103051+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 4628480 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:31.103376+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:32.103673+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:33.107370+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:34.107826+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:35.108262+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:36.108473+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:37.108799+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:38.109365+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:39.109644+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:40.109996+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:41.110276+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:42.110491+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:43.110674+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:44.111020+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:45.111243+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:46.111454+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:47.111823+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:48.111965+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:49.112221+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:50.112536+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:51.112799+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:52.113296+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:53.113583+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:54.113823+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:55.114015+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:56.114205+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:57.114421+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:58.114584+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:35:59.114754+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:00.114993+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:01.115150+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:02.115323+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:03.115465+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:04.115702+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:05.115896+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:06.116154+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:07.116357+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:08.116591+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:09.116954+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:10.117203+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:11.117435+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:12.117649+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:13.117913+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:14.118075+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:15.118293+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:16.118417+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:17.118629+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:18.118837+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:19.118974+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:20.119174+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:21.119344+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:22.119492+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:23.119733+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:24.119907+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:25.120064+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:26.120309+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:27.120612+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:28.120828+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:29.121066+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:30.121276+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:31.121471+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:32.121601+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:33.121754+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:34.121933+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:35.122218+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:36.122416+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:37.122771+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:38.122992+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:39.123231+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:40.123479+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:41.123610+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:42.123777+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:43.123930+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:44.124125+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:45.124459+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:46.124588+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:47.124705+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:48.124841+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:49.124983+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:50.125162+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:51.125329+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:52.125488+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:53.125742+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:54.125907+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:55.126049+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:56.126211+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:57.126384+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:58.126544+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:36:59.126727+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:00.126866+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:01.127049+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:02.127181+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:03.127345+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:04.127519+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:05.127827+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:52 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:52 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:06.128101+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:07.128338+0000)
Nov 29 06:58:52 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:52 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:52 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:08.128511+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:09.128685+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:10.128940+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:11.129061+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:12.129194+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:13.129398+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:14.129951+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:15.130119+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:16.130247+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:17.130380+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:18.130521+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:19.130656+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:20.130842+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:21.130945+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:22.131085+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:23.131233+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:24.131353+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:25.131490+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:26.131643+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:27.131815+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:28.131955+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:29.132125+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:30.132298+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:31.132463+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:32.132670+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:33.132804+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:34.132960+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:35.133104+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:36.133294+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:37.133460+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:38.133601+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:39.133775+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 4620288 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:40.133960+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:41.134104+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:42.134272+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:43.134390+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:44.134545+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:45.134693+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:46.134836+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:47.135020+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:48.135188+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:49.135294+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:50.135508+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:51.135664+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:52.135826+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:53.135946+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:54.136121+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:55.136321+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:56.136467+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:57.136600+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:58.136823+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 4612096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:37:59.137025+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 4603904 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:00.137259+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 4603904 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:01.137416+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 4603904 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:02.137545+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 4603904 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:03.137694+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 4603904 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:04.137837+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 4603904 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:05.137961+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Cumulative writes: 8512 writes, 34K keys, 8512 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 8512 writes, 1746 syncs, 4.88 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 628 writes, 988 keys, 628 commit groups, 1.0 writes per commit group, ingest: 0.32 MB, 0.00 MB/s
                                           Interval WAL: 628 writes, 295 syncs, 2.13 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.22              0.00         1    0.219       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.29              0.00         1    0.294       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5633efb6d610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:06.138094+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:07.138290+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:08.138444+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:09.138611+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:10.138815+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:11.139023+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:12.139186+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:13.139306+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:14.139418+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:15.139545+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:16.139684+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:17.139863+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:18.141485+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:19.141621+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:20.141813+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:21.141958+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:22.142093+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:23.142242+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:24.142950+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:25.143137+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:26.143286+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:27.143698+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:28.144027+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:29.144173+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:30.144360+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:31.144595+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:32.144812+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:33.144949+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:34.145216+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:35.145488+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:36.145697+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:37.145904+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:38.146059+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:39.146296+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:40.146711+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:41.146851+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:42.147047+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:43.157156+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:44.157292+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:45.157426+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:46.157652+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 4571136 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:47.157820+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:48.157953+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:49.158133+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:50.158294+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:51.158418+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:52.158773+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:53.158941+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:54.159139+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:55.159462+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:56.159633+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:57.159786+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:58.159985+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:38:59.160211+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:00.160444+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:01.160608+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:02.160770+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:03.160909+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:04.161050+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:05.161224+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:06.161476+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:07.161710+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:08.161922+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:09.162048+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:10.162329+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:11.162558+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:12.162841+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:13.163072+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:14.163238+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:15.163462+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 4562944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:16.163723+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:17.163932+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:18.164133+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:19.164319+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:20.164503+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:21.164739+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:22.164950+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:23.165231+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:24.165455+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:25.165727+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:26.166338+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:27.166797+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 4554752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:28.167120+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:29.167264+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:30.167456+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:31.167676+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:32.167843+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:33.167937+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:34.168067+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:35.168254+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:36.168421+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:37.168598+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:38.168774+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:39.168979+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:40.169174+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:41.169320+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:42.169455+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:43.169644+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:44.169834+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:45.169939+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:46.170147+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:47.170320+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:48.170501+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:49.170643+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:50.170846+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:51.171031+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:52.171163+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:53.171285+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:54.171412+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:55.171567+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 4546560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:56.171727+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4538368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:57.172244+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4538368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:58.172736+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4538368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:39:59.173045+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:00.173432+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:01.173613+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:02.173746+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:03.173915+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:04.174067+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:05.174168+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:06.174405+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:07.174702+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:08.174872+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:09.175074+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:10.175279+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:11.175416+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:12.175573+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:13.175726+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:14.175986+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:15.176467+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:16.176784+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:17.177019+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:18.177183+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:19.177322+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:20.177577+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:21.177755+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:22.178006+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:23.178275+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:24.178421+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:25.178583+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:26.178762+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:27.178916+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:28.179082+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:29.179287+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:30.179726+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 4530176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:31.179949+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 594.422546387s of 600.483703613s, submitted: 333
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 4538368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:32.180125+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 4268032 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:33.180304+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:34.180517+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:35.180695+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:36.180828+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:37.180960+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:38.181133+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:39.181261+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:40.181549+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:41.181694+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:42.181958+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:43.182280+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:44.182451+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:45.182628+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:46.182765+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:47.183013+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:48.183146+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:49.183259+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:50.183427+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:51.183614+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:52.183778+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:53.183949+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:54.184105+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:55.184341+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:56.184652+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:57.184855+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:58.185046+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:40:59.185196+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:00.185394+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:01.185631+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:02.185792+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:03.185939+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:04.186075+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:05.187403+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:06.187581+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:07.187774+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:08.187967+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:09.188101+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:10.188260+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:11.188481+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:12.188640+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:13.188831+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:14.188946+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:15.189100+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:16.189422+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:17.189648+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:18.189777+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:19.189930+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:20.190118+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:21.190274+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:22.190415+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:23.190603+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:24.190735+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:25.190897+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:26.191036+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:27.191191+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:28.191383+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:29.191545+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:30.191768+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:31.191996+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:32.192252+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:33.192654+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:34.192801+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:35.192943+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:36.193076+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:37.193235+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:38.193414+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:39.193608+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:40.193838+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:41.194053+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:42.194222+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:43.194435+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:44.194642+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:45.194772+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:46.194918+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:47.195296+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:48.195483+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:49.195672+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:50.195916+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:51.196094+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:52.196251+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:53.196430+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:54.196546+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:55.196700+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:56.196948+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:57.197157+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:58.197353+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:41:59.197521+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:00.197757+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:01.197916+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:02.198136+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:03.198285+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:04.198435+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:05.198589+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:06.198751+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:07.198938+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:08.199105+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:09.199221+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:10.199381+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:11.199494+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:12.199667+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:13.199817+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:14.199957+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:15.200129+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:16.200237+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:17.200350+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:18.200476+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:19.200634+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:20.200829+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:21.200982+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:22.201165+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:23.201332+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:24.201555+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:25.201727+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:26.201946+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:27.202088+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:28.202252+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:29.202377+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:30.202528+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:31.203088+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:32.203208+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:33.203336+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:34.203531+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:35.203667+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:36.203834+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:37.203971+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:38.204231+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:39.204395+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:40.204805+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:41.204983+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:42.205190+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:43.205356+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:44.205767+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:45.206049+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:46.206356+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:47.206607+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:48.206845+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:49.207012+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:50.207218+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:51.207401+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:52.207653+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:53.207938+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:54.208078+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:55.208271+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:56.208500+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:57.208681+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:58.208968+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:42:59.209087+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:00.209246+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:01.209467+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:02.209692+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:03.209950+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:04.210180+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:05.210423+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:06.210695+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:07.210972+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:08.211288+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:09.211976+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:10.212424+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:11.212573+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:12.212867+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:13.213053+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:14.213331+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:15.213480+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:16.213686+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:17.214024+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:18.214248+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:19.214487+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:20.214777+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:21.214993+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:22.215232+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:23.215551+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:24.215839+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:25.215972+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:26.216169+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:27.216358+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:28.216507+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:29.216737+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:30.216982+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:31.217188+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:32.217342+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:33.217475+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:34.217642+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:35.217813+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:36.218193+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:37.218465+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:38.218622+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:39.218766+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:40.218968+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:41.219116+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:42.219252+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:43.219380+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:44.219506+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:45.219637+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:46.219829+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:47.220014+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:48.220183+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:49.220355+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:50.220533+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:51.220643+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:52.220795+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:53.220991+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:54.221130+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:55.221391+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:56.221561+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:57.221754+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:58.221873+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:43:59.222041+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:00.222261+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:01.222432+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:02.222610+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:03.222774+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:04.222953+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:05.223142+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:06.223308+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:07.223470+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:08.223637+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:09.223773+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:10.223954+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:11.224096+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:12.224260+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:13.224832+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:14.232255+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:15.232933+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:16.233203+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:17.233747+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:18.234254+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:19.234620+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:20.235070+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:21.235403+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:22.235657+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:23.235987+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:24.236238+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:25.236529+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:26.236949+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:27.237179+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:28.237469+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:29.237687+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:30.237951+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:31.238172+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:32.238408+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:33.238713+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:34.238937+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:35.239166+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:36.239343+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:37.239512+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:38.239681+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:39.239826+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:40.240014+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:41.240239+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:42.240377+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:43.240530+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:44.240687+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:45.240874+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:46.241061+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:47.241210+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:48.241381+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:49.241529+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:50.241694+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:51.241799+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:52.241970+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:53.242117+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:54.242293+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:55.242439+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:56.242606+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:57.242756+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:58.242977+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:44:59.243073+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:00.243226+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:01.243376+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:02.243565+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:03.243737+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:04.243928+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:05.244104+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:06.244289+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:07.244435+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:08.244616+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:09.244802+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:10.245023+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:11.245146+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:12.245368+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:13.245520+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:14.245727+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:15.245971+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:16.246140+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:17.246382+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:18.246805+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:19.247218+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:20.247482+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:21.247822+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:22.247946+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:23.248253+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:24.248575+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:25.248803+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:26.249026+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:27.249233+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:28.249454+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:29.249674+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:30.249950+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:31.250196+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:32.250381+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:33.250546+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:34.250717+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:35.250851+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:36.250998+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:37.251191+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:38.251371+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:39.251534+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:40.251735+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:41.251970+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:42.252118+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:43.252229+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:44.252435+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:45.252613+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:46.252796+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:47.252948+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:48.253082+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:49.253234+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:50.253417+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:51.253547+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:52.253721+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:53.253967+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:54.254183+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:55.254331+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 4063232 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:56.255187+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:57.255617+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:58.256072+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:45:59.256269+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:00.256562+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:01.256711+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:02.256852+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:03.257070+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:04.257275+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:05.257472+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:06.257648+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:07.257788+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:08.257954+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:09.258179+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:10.258373+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:11.258544+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:12.258710+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:13.258935+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:14.259106+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:15.259273+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:16.259471+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:17.259638+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:18.259807+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 4055040 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:19.260049+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:20.260302+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:21.260492+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:22.260685+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:23.261232+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:24.261525+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:25.261719+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:26.261969+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:27.263502+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:28.263695+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:29.265015+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:30.265278+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:31.265801+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:32.266211+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:33.266575+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:34.267017+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:35.267228+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:36.267585+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:37.267842+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:38.267964+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:39.268125+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:40.268382+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:41.268518+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:42.268719+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:43.268979+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:44.269119+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:45.269255+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:46.269449+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:47.269650+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:48.269796+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 4046848 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:49.269994+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:50.270251+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:51.270423+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:52.270612+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:53.270763+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:54.270948+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:55.271074+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:56.271202+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:57.271369+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:58.271549+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:46:59.271703+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:00.271836+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:01.271982+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:02.272142+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:03.272441+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:04.272760+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:05.272964+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:06.273083+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:07.273215+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:08.273339+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:09.273482+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:10.273707+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:11.273868+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:12.274076+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:13.274171+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:14.274332+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:15.274501+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:16.274681+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:17.274827+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:18.275055+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:19.275192+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:20.275349+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:21.275503+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:22.275695+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:23.275855+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:24.276093+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:25.276206+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:26.276348+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:27.276558+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:28.276751+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:29.277824+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:30.278064+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:31.279980+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:32.281553+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:33.282431+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:34.283779+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:35.284253+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:36.285077+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:37.285542+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:38.285737+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:39.286186+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:40.286523+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:41.286958+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:42.287241+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:43.287470+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:44.287792+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:45.287945+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:46.288113+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:47.288302+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:48.288549+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:49.288806+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:50.288991+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:51.289138+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:52.289371+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:53.289564+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:54.289828+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:55.289994+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:56.290213+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:57.290478+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:58.290699+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:47:59.290948+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:00.291131+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:01.291340+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:02.291564+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:03.291693+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:04.291980+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:05.292119+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.8 total, 600.0 interval
                                           Cumulative writes: 9194 writes, 35K keys, 9194 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9194 writes, 2074 syncs, 4.43 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 682 writes, 1062 keys, 682 commit groups, 1.0 writes per commit group, ingest: 0.34 MB, 0.00 MB/s
                                           Interval WAL: 682 writes, 328 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:06.292289+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:07.292445+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:08.292570+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:09.292720+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:10.292954+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:11.293221+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:12.293454+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:13.293601+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:14.293720+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:15.293830+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:16.294013+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:17.294162+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:18.294312+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:19.294448+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:20.300136+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-mgr[74948]: [devicehealth INFO root] Check health
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:21.300299+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:22.300491+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 ms_handle_reset con 0x5633f3a4e800 session 0x5633f43e14a0
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: handle_auth_request added challenge on 0x5633f3a4ec00
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:23.300691+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:24.300845+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 4038656 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:25.301004+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:26.301122+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:27.301267+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:28.301444+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:29.301632+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:30.301848+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:31.302034+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:32.302195+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:33.302353+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:34.302529+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:35.302724+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:36.302961+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:37.303140+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:38.303310+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:39.303406+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:40.303595+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:41.303740+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:42.303945+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:43.304066+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:44.304217+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:45.304449+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:46.304610+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:47.304742+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:48.304968+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:49.305168+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:50.305390+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:51.305549+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:52.305733+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:53.305938+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:54.306095+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:55.306258+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 4030464 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:56.306437+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:57.306602+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:58.306736+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:48:59.306972+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:00.307160+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:01.307288+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:02.307441+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:03.307614+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 4022272 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:04.307789+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:05.307943+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:06.308129+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:07.308324+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:08.308506+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:09.308664+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:10.308872+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:11.309139+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:12.309368+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:13.309561+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:14.309822+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:15.309971+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:16.310143+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:17.310354+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:18.310548+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:19.310758+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:20.311006+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:21.311219+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:22.311455+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:23.311692+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:24.311847+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:25.312054+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:26.312261+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:27.312428+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:28.312672+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:29.312849+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 4014080 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:30.313102+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:31.313243+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:32.313464+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:33.313625+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:34.313810+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:35.313996+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:36.314264+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:37.314458+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:38.314617+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:39.314760+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:40.314941+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:41.315107+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:42.315264+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:43.315487+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:44.315684+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:45.315865+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:46.316096+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:47.316243+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:48.316396+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:49.316533+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:50.316646+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:51.316796+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:52.316949+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:53.317097+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:54.317243+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:55.317398+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:56.317611+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:57.317813+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:58.318012+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:49:59.318168+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:00.318385+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 4005888 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:01.318541+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:02.318691+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:03.318903+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:04.319055+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:05.319212+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:06.319411+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:07.319604+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:08.319844+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:09.320066+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:10.320248+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:11.320426+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:12.320644+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:13.320828+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:14.320991+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:15.321853+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:16.322063+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:17.322244+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:18.322449+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:19.322613+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:20.322799+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:21.322928+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:22.323115+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:23.323265+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:24.323447+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:25.323610+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:26.323764+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:27.323967+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:28.324194+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:29.324335+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:30.324580+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 ms_handle_reset con 0x5633f3515800 session 0x5633f1da8d20
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: handle_auth_request added challenge on 0x5633f3a4f000
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3997696 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:31.324713+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.286376953s of 600.261535645s, submitted: 354
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 3981312 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:32.324961+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 3981312 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:33.328240+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 3981312 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:34.328377+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 3817472 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:35.328518+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3588096 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:36.328712+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3538944 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:37.328896+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [0,0,0,1,0,1])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:38.329103+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:39.329278+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:40.329458+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:41.330656+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:42.331477+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:43.331842+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:44.332349+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:45.332630+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:46.332989+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:47.333337+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:48.333699+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:49.333895+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:50.334252+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:51.334491+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:52.334800+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3530752 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:53.334937+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3522560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:54.335053+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3522560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:55.335218+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3522560 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:56.335417+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:57.335554+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:58.335780+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:50:59.337444+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:00.337660+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:01.337795+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:02.337958+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:03.338121+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:04.338276+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:05.338414+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:06.338581+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:07.338940+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:08.339219+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:09.339640+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:10.339963+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:11.340199+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:12.340416+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:13.340679+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:14.340975+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:15.341834+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:16.342115+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:17.342416+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:18.342688+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3514368 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:19.342974+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:20.343286+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:21.343558+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:22.343798+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:23.344026+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:24.344281+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:25.344536+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:26.344787+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:27.344984+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:28.345177+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:29.345407+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:30.345661+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:31.345923+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:32.346091+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:33.346310+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3506176 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:34.346560+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3497984 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:35.346798+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:36.347036+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:37.347303+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:38.347625+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:39.347984+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:40.348255+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:41.348478+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:42.348693+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:43.348870+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:44.349115+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:45.349402+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:46.349691+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:47.349929+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:48.350098+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:49.350348+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:50.350605+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:51.350981+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:52.351178+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:53.351465+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:54.351719+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:55.351999+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:56.352140+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:57.352317+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:58.352484+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:51:59.352695+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:00.352976+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:01.353166+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:02.353393+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:03.353605+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:04.353931+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:05.354210+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:06.354400+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:07.354563+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:08.354756+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:09.354930+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:10.355124+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:11.355358+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:12.355542+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:13.355776+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:14.355944+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:15.356096+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:16.356237+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:17.356400+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:18.356670+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:19.356872+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:20.357130+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:21.357314+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:22.357552+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:23.357703+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:24.357943+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:25.358140+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:26.358355+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:27.358491+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:28.358706+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:29.359019+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:30.359313+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:31.359496+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:32.359665+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:33.359934+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:34.360211+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:35.360391+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:36.360603+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:37.360962+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:38.361145+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:39.361321+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:40.361638+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:41.361833+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:42.362075+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:43.362286+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:44.362429+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:45.362694+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:46.362836+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:47.363027+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:48.363197+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:49.363373+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:50.363545+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:51.363678+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:52.363819+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:53.363978+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:54.364181+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:55.364529+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:56.364719+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:57.364958+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:58.365164+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:52:59.365340+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:00.365621+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:01.365774+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:02.366029+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:03.366231+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:04.366447+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:05.366637+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:06.366810+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:07.366993+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:08.367208+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:09.367373+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:10.367586+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:11.367776+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:12.367974+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:13.368172+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:14.368373+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:15.368502+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:16.368728+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:17.368900+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:18.369232+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:19.369450+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:20.369739+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:21.369956+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:22.370296+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:23.370525+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:24.370700+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:25.370848+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:26.371012+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:27.371179+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:28.371387+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:29.371557+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:30.371778+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:31.371952+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:32.372187+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:33.372364+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3489792 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:34.372533+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:35.372675+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:36.372872+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:37.373061+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:38.373345+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:39.373489+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:40.373750+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:41.373899+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:42.374087+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:43.374222+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:44.374423+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:45.374561+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:46.374742+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:47.374922+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:48.375142+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:49.375394+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:50.375619+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:51.375827+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:52.375999+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:53.376196+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:54.376433+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:55.376623+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:56.376832+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:57.377018+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:58.377154+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:53:59.377364+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:00.377583+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:01.377819+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:02.378008+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:03.378202+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:04.378360+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:05.378524+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:06.378745+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:07.378941+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:08.379143+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:09.379326+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:10.379521+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:11.379716+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:12.379929+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:13.380142+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:14.380336+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:15.380533+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:16.380722+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:17.380976+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:18.381211+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:19.381414+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:20.381623+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:21.381804+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:22.382051+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:23.382229+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:24.382369+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:25.382557+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:26.382857+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:27.383109+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:28.383318+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:29.383624+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:30.383993+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:31.384240+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:32.384434+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:33.384662+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:34.384827+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:35.385162+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:36.385535+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:37.385757+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:38.402307+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:39.402554+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:40.403041+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:41.403333+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:42.403501+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:43.403648+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:44.403842+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:45.403993+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:46.404172+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:47.404380+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:48.404523+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:49.404673+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:50.404861+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:51.405020+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:52.405148+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:53.405325+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:54.405534+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:55.405725+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:56.405919+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:57.406068+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:58.406211+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:54:59.406325+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:00.406515+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:01.406830+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:02.407006+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:03.407173+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:04.407390+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:05.407585+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:06.407854+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:07.408079+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:08.408312+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:09.408491+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:10.408730+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:11.408951+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:12.409194+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:13.409425+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:14.409645+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:15.410017+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:16.410229+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:17.410367+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:18.410615+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:19.410836+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:20.411262+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:21.411582+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:22.411855+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:23.412213+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:24.412381+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:25.412563+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:26.412806+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:27.413076+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:28.413312+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:29.413801+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:30.414124+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:31.414350+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:32.414531+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:33.414756+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:34.414940+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:35.415152+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:36.415297+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:37.415433+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:38.415583+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:39.415786+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:40.415998+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:41.416219+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:42.416449+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:43.416649+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:44.416836+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:45.417032+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:46.417271+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:47.417437+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:48.417659+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:49.417839+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:50.418038+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:51.418415+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:52.418680+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:53.418918+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:54.419069+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:55.419288+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:56.419450+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:57.419603+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:58.419747+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:55:59.419904+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:00.420111+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:01.420273+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:02.420474+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:03.420642+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:04.420815+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:05.420981+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:06.421174+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:07.421341+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:08.421512+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:09.421677+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:10.421966+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3481600 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:11.422150+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:12.424587+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:13.424784+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:14.424922+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:15.425253+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:16.425374+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:17.425542+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:18.425707+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:19.425925+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:20.426118+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3473408 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:21.426289+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:22.426513+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:23.426724+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:24.426927+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:25.427197+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:26.427447+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:27.427713+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:28.427910+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:29.428072+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:30.428381+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:31.428589+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:32.428757+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:33.428958+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:34.429114+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:35.429324+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:36.429472+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:37.429608+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:38.429781+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:39.429951+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:40.430165+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3457024 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:41.430356+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3440640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:42.430517+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3440640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:43.430675+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3440640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:44.430839+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3440640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:45.431012+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3440640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:46.431200+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3440640 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:47.431478+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:48.431626+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:49.431859+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:50.432189+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:51.432410+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:52.432699+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:53.432966+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:54.433102+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:55.433261+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:56.433425+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:57.433686+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:58.433900+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:56:59.434096+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:00.434315+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:01.434489+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:02.434643+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:03.434777+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:04.434944+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:05.435088+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:06.435362+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:07.435547+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:08.437739+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:09.438992+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:10.439961+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:11.440649+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:12.441384+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:13.441745+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:14.442119+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:15.442710+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:16.443033+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:17.443356+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:18.443667+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:19.443867+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:20.444043+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:21.444282+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:22.444468+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:23.444752+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:24.445030+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:25.445209+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:26.445449+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:27.445593+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:28.445779+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:29.446023+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:30.446209+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:31.446357+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:32.446520+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:33.446725+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3432448 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:34.446900+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:35.447091+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:36.447311+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:37.447511+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:38.447768+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:39.448047+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:40.448307+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:41.448525+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:42.448698+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:43.448949+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:44.449125+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:45.449291+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:46.449477+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:47.449690+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:48.449915+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:49.450159+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:50.450411+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:51.450602+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:52.450800+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:53.451051+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:54.451191+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:55.451478+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:56.451748+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:57.451986+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:58.452115+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:57:59.452286+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:00.452546+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:01.452724+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:02.453004+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:03.453242+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:04.453453+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:05.453671+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.8 total, 600.0 interval
                                           Cumulative writes: 9755 writes, 36K keys, 9755 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 9755 writes, 2327 syncs, 4.19 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 561 writes, 878 keys, 561 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s
                                           Interval WAL: 561 writes, 253 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:06.453823+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:07.454013+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:08.454234+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:09.454428+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:10.454616+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:11.454820+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:12.454992+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:13.455208+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:14.455383+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3424256 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:15.455537+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3416064 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:16.455685+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3416064 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:17.455830+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3416064 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:18.456022+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3416064 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:19.456170+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: osd.1 139 heartbeat osd_stat(store_statfs(0x1bc66c000/0x0/0x1bfc00000, data 0xd6227/0x1a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3260416 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: do_command 'config diff' '{prefix=config diff}'
Nov 29 06:58:53 compute-0 ceph-osd[85162]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:20.456317+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: do_command 'config show' '{prefix=config show}'
Nov 29 06:58:53 compute-0 ceph-osd[85162]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 06:58:53 compute-0 ceph-osd[85162]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 06:58:53 compute-0 ceph-osd[85162]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 06:58:53 compute-0 ceph-osd[85162]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 06:58:53 compute-0 ceph-osd[85162]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 2826240 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 06:58:53 compute-0 ceph-osd[85162]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 06:58:53 compute-0 ceph-osd[85162]: bluestore.MempoolThread(0x5633efc4bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917256 data_alloc: 218103808 data_used: 299008
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:21.456467+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 2793472 heap: 88104960 old mem: 2845415832 new mem: 2845415832
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: tick
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_tickets
Nov 29 06:58:53 compute-0 ceph-osd[85162]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T06:58:22.456638+0000)
Nov 29 06:58:53 compute-0 ceph-osd[85162]: do_command 'log dump' '{prefix=log dump}'
Nov 29 06:58:53 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:53.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:53 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:53 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:53 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:53.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:53 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15078 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:53 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/258398947' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 06:58:53 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/532443902' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 06:58:53 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/4111572753' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:58:53 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2584480393' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 06:58:53 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3758656299' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 06:58:53 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/844408131' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 06:58:53 compute-0 ceph-mon[74654]: from='client.24970 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:53 compute-0 ceph-mon[74654]: from='client.24893 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:53 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1662226395' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 06:58:53 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/4145305371' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 06:58:53 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 06:58:53 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 06:58:53 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2594614874' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 06:58:53 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15093 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 06:58:54 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3783031133' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24944 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Optimize plan auto_2025-11-29_06:58:54
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: [balancer INFO root] do_upmap
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.rgw.root', 'backups', 'default.rgw.control']
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: [balancer INFO root] prepared 0/10 changes
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25021 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 06:58:54 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1423480369' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24950 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:54 compute-0 crontab[271576]: (root) LIST (root)
Nov 29 06:58:54 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25033 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:54 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 06:58:54 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/634521314' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 06:58:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:58:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 06:58:55 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4117577449' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 06:58:55 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24962 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:55 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:55 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25039 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:55.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:55 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:55 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:55 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:55.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:55 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 06:58:55 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/363659733' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 06:58:55 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24968 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:55 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:55 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15147 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:55 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:55.740+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 06:58:55 compute-0 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 06:58:55 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15153 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:56 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24983 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:56 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25066 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 06:58:56 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186939961' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 06:58:56 compute-0 podman[271735]: 2025-11-29 06:58:56.215116218 +0000 UTC m=+0.176993595 container health_status 843911ed0b6203707f0633a7e737420fbf54d55170a2d9cdc86db1752ff76af8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 29 06:58:56 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15168 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:56 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24989 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:56 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25075 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:56 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25081 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:56 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 06:58:56 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3606582697' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 06:58:56 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25087 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:56 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.24998 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15186 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/3995147939' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/2877614875' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/2517383288' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1377123572' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3003957364' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1120183002' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mon[74654]: pgmap v1392: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:57 compute-0 ceph-mon[74654]: pgmap v1393: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1383921042' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2433791124' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2594614874' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1233692885' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3946325939' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/3661981742' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3783031133' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:58:57 compute-0 sudo[271865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:58:57 compute-0 sudo[271865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:58:57 compute-0 sudo[271865]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:57 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:57 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25096 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:58:57 compute-0 sudo[271894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 06:58:57 compute-0 sudo[271894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:58:57 compute-0 sudo[271894]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:57.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:57 compute-0 sudo[271929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:58:57 compute-0 sudo[271929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:58:57 compute-0 sudo[271929]: pam_unix(sudo:session): session closed for user root
Nov 29 06:58:57 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:57 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:58:57 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:57.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:58:57 compute-0 sudo[271954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/336ec58c-893b-528f-a0c1-6ed1196bc047/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 06:58:57 compute-0 sudo[271954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:58:57 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25010 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25105 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:58:57 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25016 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:58:58 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25117 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:58:58 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:58.340+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 06:58:58 compute-0 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 06:58:58 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25028 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:58:58 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:58:58.669+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 06:58:58 compute-0 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 06:58:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 06:58:59 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1523353195' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 06:58:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 06:58:59 compute-0 systemd[1]: Starting Hostname Service...
Nov 29 06:58:59 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:58:59 compute-0 systemd[1]: Started Hostname Service.
Nov 29 06:58:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:58:59.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:59 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15207 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:59 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:58:59 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:58:59 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:58:59.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:58:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 06:58:59 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1840177804' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 06:58:59 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15219 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:58:59 compute-0 podman[272069]: 2025-11-29 06:58:59.824154929 +0000 UTC m=+1.778225016 container exec c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 06:58:59 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 06:58:59 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/95145201' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 06:59:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:59:00 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15231 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:00 compute-0 sudo[272326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:59:00 compute-0 sudo[272326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:59:00 compute-0 sudo[272326]: pam_unix(sudo:session): session closed for user root
Nov 29 06:59:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 06:59:00 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2283979250' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:59:00 compute-0 sudo[272366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 06:59:00 compute-0 sudo[272366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 06:59:00 compute-0 sudo[272366]: pam_unix(sudo:session): session closed for user root
Nov 29 06:59:00 compute-0 podman[272258]: 2025-11-29 06:59:00.500057073 +0000 UTC m=+0.580307601 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 06:59:00 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15249 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:00 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 06:59:00 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/871154065' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 06:59:00 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15261 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:59:01 compute-0 sshd-session[272275]: Invalid user terraria from 34.92.81.41 port 40340
Nov 29 06:59:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:59:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 06:59:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:59:01.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 06:59:01 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 06:59:01 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3338702776' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.15078 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.15093 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1819404255' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.24944 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3635354810' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.25021 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1423480369' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/3266793758' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.24950 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3429457872' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.25033 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/634521314' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/4117577449' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.24962 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: pgmap v1394: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.25039 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/363659733' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.24968 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.25057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.15147 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.15153 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.24983 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.25066 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/4186939961' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.15168 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:01 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3606582697' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 06:59:01 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:59:01 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:59:01 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:59:01.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:59:01 compute-0 podman[272069]: 2025-11-29 06:59:01.442955389 +0000 UTC m=+3.397025496 container exec_died c3c8680245c67f710ba1b448e2d4c77c4c02bc368d31276f0332ad942957e3cf (image=quay.io/ceph/ceph:v18, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 06:59:01 compute-0 sshd-session[272275]: Received disconnect from 34.92.81.41 port 40340:11: Bye Bye [preauth]
Nov 29 06:59:01 compute-0 sshd-session[272275]: Disconnected from invalid user terraria 34.92.81.41 port 40340 [preauth]
Nov 29 06:59:02 compute-0 sshd-session[272631]: Received disconnect from 103.143.238.173 port 36978:11: Bye Bye [preauth]
Nov 29 06:59:02 compute-0 sshd-session[272631]: Disconnected from authenticating user root 103.143.238.173 port 36978 [preauth]
Nov 29 06:59:02 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:59:03 compute-0 sshd-session[272641]: Invalid user david from 162.214.92.14 port 40504
Nov 29 06:59:03 compute-0 sshd-session[272641]: Received disconnect from 162.214.92.14 port 40504:11: Bye Bye [preauth]
Nov 29 06:59:03 compute-0 sshd-session[272641]: Disconnected from invalid user david 162.214.92.14 port 40504 [preauth]
Nov 29 06:59:03 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:59:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:59:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:59:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:59:03.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:59:03 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:59:03 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:59:03 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:59:03.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:59:03 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 06:59:04 compute-0 podman[272678]: 2025-11-29 06:59:04.162695362 +0000 UTC m=+0.124365403 container health_status 81ea2bcb89266a0110a379c2083d8cc042460d4a35c8ed3bf349dd1083925000 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Nov 29 06:59:04 compute-0 podman[272679]: 2025-11-29 06:59:04.185146286 +0000 UTC m=+0.139045682 container health_status b3f42e9a710907b47913576d27471d163da731262c1464357cff24681ce600c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 06:59:04 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15285 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mgr[74948]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 06:59:04 compute-0 ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-mgr-compute-0-vxabpq[74944]: 2025-11-29T06:59:04.348+0000 7f90f1cf5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 06:59:04 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25231 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1183724430' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/800220038' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.24989 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.25075 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.25081 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.25087 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.24998 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.15186 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: pgmap v1395: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.25096 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.25010 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.25105 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.25016 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.25117 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.25028 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2561335456' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1523353195' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: pgmap v1396: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.15207 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1840177804' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3562221273' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/2432053198' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.15219 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2442087058' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/4281976021' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/95145201' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.15231 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2283979250' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/871154065' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3338702776' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:59:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 06:59:04 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/256323056' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 06:59:04 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258564348' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:04 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 06:59:04 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/398999456' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 06:59:05 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25145 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:59:05 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25252 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:59:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:59:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:59:05.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:59:05 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:59:05 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:59:05 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:59:05.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:59:05 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25154 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 sshd-session[272899]: Invalid user support from 176.109.67.96 port 42150
Nov 29 06:59:05 compute-0 sshd-session[272899]: Received disconnect from 176.109.67.96 port 42150:11: Bye Bye [preauth]
Nov 29 06:59:05 compute-0 sshd-session[272899]: Disconnected from invalid user support 176.109.67.96 port 42150 [preauth]
Nov 29 06:59:05 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 06:59:05 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/543064333' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25172 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 podman[272900]: 2025-11-29 06:59:05.872520719 +0000 UTC m=+1.180436541 container exec f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1177386484' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/4203124747' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/255758300' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.15249 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2772849280' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.15261 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: pgmap v1397: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1430961509' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/3531646775' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/269676052' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1938043130' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2078930364' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/4256520886' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: pgmap v1398: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/55082743' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2313769865' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/585662946' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1903066086' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/4055116638' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/172091942' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3473434345' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1465321967' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.15285 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1200614993' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/3044107192' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3267749641' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/256323056' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1258564348' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/398999456' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/3986255285' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 06:59:05 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/46235445' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 06:59:06 compute-0 podman[272987]: 2025-11-29 06:59:06.027103451 +0000 UTC m=+0.116861715 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:59:06 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25285 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:06 compute-0 podman[272900]: 2025-11-29 06:59:06.14052349 +0000 UTC m=+1.448439302 container exec_died f5b8edcc79df1f136246f04a71d5e10f6a214865dd4162430c1b6090267d988f (image=quay.io/ceph/haproxy:2.3, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-haproxy-rgw-default-compute-0-zzbnoj)
Nov 29 06:59:06 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25184 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 06:59:06 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798126204' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 06:59:06 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25297 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 06:59:06 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192370045' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 06:59:06 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25202 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 06:59:06 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3118706975' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 06:59:06 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25309 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:06 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 06:59:06 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/473274030' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 06:59:07 compute-0 podman[273087]: 2025-11-29 06:59:07.07208021 +0000 UTC m=+0.641381426 container exec c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, version=2.2.4, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.tags=Ceph keepalived, name=keepalived, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, distribution-scope=public, build-date=2023-02-22T09:23:20)
Nov 29 06:59:07 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25214 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 06:59:07 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4218133949' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 06:59:07 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:59:07 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25327 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:59:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:59:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:59:07.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:59:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 06:59:07 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2385854941' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 06:59:07 compute-0 podman[273087]: 2025-11-29 06:59:07.432408653 +0000 UTC m=+1.001709789 container exec_died c5da9d8380f0eb7ca78841b66eaacc1789ab9c8fb67eaab27657426fdf51bade (image=quay.io/ceph/keepalived:2.2.4, name=ceph-336ec58c-893b-528f-a0c1-6ed1196bc047-keepalived-rgw-default-compute-0-uyqrbs, release=1793, vcs-type=git, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.component=keepalived-container, distribution-scope=public, description=keepalived for Ceph, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2)
Nov 29 06:59:07 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:59:07 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:59:07 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:59:07.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:59:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 06:59:07 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/365222304' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 06:59:07 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25232 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:07 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25339 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:07 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 06:59:07 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2990571804' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.25231 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.25237 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.25145 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: pgmap v1399: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.25252 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1013829368' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.25154 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3349427670' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.25267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/2682987359' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/543064333' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/1171312674' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.25172 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.25285 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/2398107138' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1657498584' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.25184 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3798126204' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/4123701041' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/3723082173' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/1192370045' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/3118706975' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.101:0/3137983449' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.102:0/1378579091' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: from='client.? 192.168.122.100:0/473274030' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 06:59:08 compute-0 sudo[271954]: pam_unix(sudo:session): session closed for user root
Nov 29 06:59:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 06:59:08 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25241 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 06:59:08 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500240129' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 06:59:08 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4288974213' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 06:59:08 compute-0 ceph-mon[74654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/717556443' entity='mgr.compute-0.vxabpq' 
Nov 29 06:59:08 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 06:59:09 compute-0 ceph-mon[74654]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 06:59:09 compute-0 ceph-mon[74654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2160346123' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 06:59:09 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.25247 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 06:59:09 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15408 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 06:59:09 compute-0 ceph-mgr[74948]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 06:59:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:59:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:59:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.102 - anonymous [29/Nov/2025:06:59:09.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:59:09 compute-0 radosgw[93592]: ====== starting new request req=0x7f7c891d26f0 =====
Nov 29 06:59:09 compute-0 radosgw[93592]: ====== req done req=0x7f7c891d26f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 06:59:09 compute-0 radosgw[93592]: beast: 0x7f7c891d26f0: 192.168.122.100 - anonymous [29/Nov/2025:06:59:09.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 06:59:09 compute-0 ceph-mgr[74948]: log_channel(audit) log [DBG] : from='client.15414 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
